WO2018072271A1 - Image display optimization method and apparatus - Google Patents

Image display optimization method and apparatus

Info

Publication number
WO2018072271A1
WO2018072271A1 (PCT/CN2016/108730, CN2016108730W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
information
displayed
display
Prior art date
Application number
PCT/CN2016/108730
Other languages
English (en)
French (fr)
Inventor
黄帅
王世通
葛建强
邹志煌
王妙锋
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201680080589.8A (CN108701439B)
Priority to US16/342,451 (US10847073B2)
Publication of WO2018072271A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image display optimization method and apparatus.
  • Image display optimization adjusts the display parameters of the terminal device's screen according to the content of the image to be displayed, enabling the user to obtain an optimal viewing experience for that image.
  • On the terminal device, the display optimization process is as follows: after the user taps an image thumbnail in the image library, the screen of the terminal device loads the image and plays an enlarging transition animation until the final image is completely displayed on the screen.
  • Meanwhile, the display optimization module of the terminal device analyzes the gray histogram, color histogram, or sensitivity (ISO) value of the image in the background to obtain an analysis result, and then adjusts the display parameters of the screen according to the analysis result.
  • However, this basic information cannot accurately reflect the content of the image, which causes the optimization algorithm to generate incorrect screen adjustment parameters and degrades the final display effect. For example, night-scene images and backlit images both have a large number of pixels distributed in the dark portion of the gray histogram, yet for the backlit image the dark-portion details of the photographed subject should be highlighted, whereas for the night-scene image they should not.
  • If the scene information (such as night scene or backlight) is analyzed in the image analysis step, the image content information available during display optimization is more accurate and detailed, but this increases the computational complexity of the image analysis process.
  • Ordinarily, the time taken by the terminal device from receiving the thumbnail tap command to completely displaying the image is between 500 ms and 800 ms. Adding a scene recognition algorithm at this point to obtain the scene information would greatly increase the time needed to open the image, giving the user a laggy viewing experience.
  • The embodiment of the present invention provides an image display optimization method, device, and terminal, which are used to solve the problem in the prior art that adding scene-information analysis makes image display take a long time.
  • the present application provides an image display optimization method, which can be applied to a terminal device, which can include a camera.
  • the method includes:
  • the scenes that may be involved in the embodiments of the present application may include night scenes, backlighting, blue sky, green plants, food, portraits, cloudy, grass, cities, and the like.
  • The display optimization may include one or more of the following operations in combination: color display optimization, contrast display optimization, and sharpness display optimization.
  • During image capture, an image analysis step is added to analyze the scene information (such as night scene or backlight), and the scene information is then written into the image.
  • When the display is optimized, the scene information contained in the image content information is used to optimize the display of the image, so that the displayed image is more accurate and detailed, while scene-recognition operations are avoided at the moment the image is opened, thereby improving the smoothness of image browsing.
  • The optimization algorithm provided by the embodiment of the present application can implement the transmission of the scene information between the shooting end and the display end without any hardware modification to the terminal device.
  • the scene information is written to the image to be displayed by:
  • A preview picture is displayed on the terminal device, where the preview picture is a picture collected by the camera;
  • Determining, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and image information in the preview picture, the scene information to which the image in the preview picture belongs;
  • Upon receiving the image capturing instruction, capturing the image to be displayed and writing the obtained scene information into the image to be displayed.
  • By identifying the scene information during the photo preview step, more sensor information can be obtained, improving the accuracy of the discrimination result.
  • the determining the scene information to which the image in the preview picture belongs according to the information acquired by the at least one sensor and the image information in the preview picture includes:
  • Before the shooting instruction is received, classifying, by predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, and acquiring the scene type output by each scene classifier, where N is a positive integer not less than 2;
  • The scene information corresponding to the scene type output the greatest number of times is determined as the scene information of the image to be displayed.
  • The discrimination results of multiple preview frames are fused, further improving the robustness of the scene recognition algorithm.
  • Optionally, the classifying, by the predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs includes:
  • For every preset number of frames in the preview picture after the open command is received, performing the following operation: classifying, by a scene classifier selected in the configured order, the scene to which the image in the current preview frame belongs.
  • Optionally, the writing of the obtained scene information into the image to be displayed includes:
  • The MakerNote field in the EXIF area is used to store the scene information, establishing a scene-information transmission path between the shooting end and the display end, so that display optimization based on scene information is achieved without changing or adding hardware.
  • the display optimization of the image to be displayed based on the scene information written in the image to be displayed includes:
  • the embodiment of the present application further provides an image display optimization device, where the device is applied to a terminal device including a camera, including:
  • a receiving module configured to receive a display instruction triggered by the user for displaying an image to be displayed;
  • a scene recognition module configured to identify, when the display instruction is received by the receiving module, scene information contained in the image to be displayed, where the scene information is written into the image to be displayed when the image is captured by the camera;
  • a display optimization module configured to perform display optimization on the image to be displayed according to the scene information recognized by the scene recognition module;
  • a display module configured to display the to-be-displayed image after being optimized by the display optimization module.
  • The display module is further configured to display a preview picture after the receiving module receives the opening instruction of the camera, where the preview picture is a picture collected by the camera;
  • the device also includes:
  • the scene information writing module is configured to write the scene information into the image to be displayed by:
  • determining, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and image information in the preview picture displayed by the display module, the scene information to which the image in the preview picture belongs;
  • when the receiving module receives the image capturing instruction, capturing the image to be displayed and writing the obtained scene information into the image to be displayed.
  • When determining, according to the information acquired by the at least one sensor and the image information in the preview picture, the scene information to which the image in the preview picture belongs, the scene information writing module is specifically configured to:
  • after the receiving module receives the opening instruction and before it receives the shooting instruction, classify, by the predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, and acquire the scene type output by each scene classifier, where N is a positive integer not less than 2;
  • The scene information corresponding to the scene type output the greatest number of times is determined as the scene information of the image to be displayed.
  • When classifying, by the predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, the scene information writing module is specifically configured to:
  • for every preset number of frames in the preview picture after the receiving module receives the open command, perform the following operation: classify, by a scene classifier selected in the configured order, the scene to which the image in the current preview frame belongs.
  • When writing the obtained scene information into the image to be displayed, the scene information writing module is configured to write the obtained scene information into the vendor comment (MakerNote) field of the exchangeable image file (EXIF) data area of the image.
  • the display optimization module is specifically configured to:
  • the embodiment of the present application further provides a terminal, including:
  • a processor configured to: identify, when a user-triggered display instruction for displaying the image to be displayed is received, scene information contained in the image to be displayed, where the scene information is written into the image to be displayed when the camera captures the image; and perform display optimization on the image to be displayed according to the identified scene information;
  • a display configured to display the image to be displayed after display optimization.
  • the processor can include one or more general purpose processors.
  • The display can be a liquid crystal display (English: Liquid Crystal Display, LCD for short) or an organic light-emitting diode (OLED) display.
  • the processor is further configured to: when receiving an opening instruction of the camera, instruct the display to display a preview image, where the preview image is a picture collected by the camera;
  • the display is further configured to display the preview picture;
  • the processor is further configured to: determine, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and image information in the preview picture, the scene information to which the image in the preview picture belongs; and when receiving the image capturing instruction, capture the image to be displayed and write the obtained scene information into the image to be displayed.
  • When determining, according to the information acquired by the at least one sensor and the image information in the preview picture, the scene information to which the image in the preview picture belongs, the processor is specifically configured to:
  • classify, by the predefined N scene classifiers before receiving the shooting instruction, the scene to which the image in the preview picture belongs according to the information acquired by the at least one sensor and the image information in the preview picture, and acquire the scene type output by each scene classifier, where N is a positive integer not less than 2;
  • The scene information corresponding to the scene type output the greatest number of times is determined as the scene information of the image to be displayed.
  • When classifying, by the predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, the processor is specifically configured to:
  • for every preset number of frames in the preview picture after the open command is received, perform the following operation: classify, by a scene classifier selected in the configured order, the scene to which the image in the current preview frame belongs.
  • When writing the obtained scene information into the image to be displayed, the processor is specifically configured to:
  • write the obtained scene information into the vendor comment (MakerNote) field of the EXIF data area of the exchangeable image file of the image.
  • When performing display optimization on the image to be displayed according to the scene information written in the image to be displayed, the processor is specifically configured to:
  • FIG. 1 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an image display optimization method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for writing scene information according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the adjustment curve of Vin and Vout according to an embodiment of the present application.
  • FIG. 5A is a flowchart of a method for writing scene information into an image according to an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram of a method for writing image information of scene information according to an embodiment of the present disclosure
  • 6A is a flowchart of a display optimization method according to an embodiment of the present application.
  • FIG. 6B is a schematic diagram of a display optimization method according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of a display optimization apparatus provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of a device implementing the display optimization apparatus according to an embodiment of the present application.
  • An embodiment of the present invention provides an image display optimization method and apparatus for solving the problem in the prior art that adding scene-information analysis makes image display take a long time.
  • The method and the apparatus are based on the same inventive concept. Since their principles for solving the problem are similar, the implementations of the apparatus and the method may refer to each other, and repeated descriptions are omitted.
  • The image display optimization scheme of the embodiments of the present application may be implemented using various electronic devices that have a photographing function and can be used for display, including but not limited to personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, tablets, personal digital assistants, and media players), consumer electronics, minicomputers, mainframe computers, and so on.
  • the electronic device is preferably an intelligent mobile terminal.
  • the solution provided by the embodiment of the present application is specifically described below by taking an intelligent mobile terminal as an example.
  • the terminal 100 includes a display device 110, a processor 120, and a memory 130.
  • the memory 130 can be used to store software programs and data, and the processor 120 executes various functional applications and data processing of the terminal 100 by running software programs and data stored in the memory 130.
  • The memory 130 may mainly include a storage program area and a storage data area. The storage program area may store an operating system and the applications required for at least one function (such as an image display optimization function or a scene classifier function); the storage data area may store data created according to the use of the terminal 100 (such as audio data, a phone book, or exchangeable image file (EXIF) data).
  • The memory 130 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • The processor 120 is the control center of the terminal 100. It connects the various parts of the terminal through various interfaces and lines, and executes the various functions of the terminal 100 and processes data by running or executing the software programs and/or data stored in the memory 130, thereby monitoring the terminal as a whole.
  • the processor 120 may include one or more general-purpose processors, and may also include one or more DSPs (Digital Signal Processors) for performing related operations to implement the technical solutions provided by the embodiments of the present application.
  • the terminal 100 may further include an input device 140 for receiving input digital information, character information or contact touch/contactless gestures, and generating signal inputs related to user settings and function control of the terminal 100, and the like.
  • the input device 140 may include a touch panel 141.
  • The touch panel 141, also referred to as a touch screen, can collect touch operations performed on or near it (such as operations performed by the user on or near the touch panel 141 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 141 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller. The touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 120; it can also receive commands from the processor 120 and execute them. For example, when the user taps a picture with a finger on the touch panel 141, the touch detection device detects the signal brought by the tap and transmits it to the touch controller; the touch controller converts the signal into coordinates and sends them to the processor 120. Based on the coordinates and the type of the signal (tap or double-tap), the processor 120 determines the operation to be performed on the image (such as image enlargement or full-screen display), and then determines the memory space required to perform the operation. If the required memory space is less than the free memory, the resulting interface is displayed in full screen on the display panel 111 included in the display device, thereby completing the operation.
  • the touch panel 141 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input device 140 may further include other input devices 142, which may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like. One or more of them.
  • the display device 110 includes a display panel 111 for displaying information input by the user or information provided to the user, and various menu interfaces of the terminal device 100, etc., which are mainly used for displaying images in the terminal 100 in the embodiment of the present application.
  • The display panel 111 may be configured in the form of a liquid crystal display (English: Liquid Crystal Display, LCD for short) or an organic light-emitting diode (OLED) display.
  • the touch panel 141 can cover the display panel 111 to form a touch display screen.
  • the terminal 100 may further include a power source 150 for powering other modules and a camera 160 for taking photos or videos.
  • the terminal 100 may also include one or more sensors 170, such as an acceleration sensor, a light sensor, a GPS sensor, an infrared sensor, a laser sensor, a position sensor or a lens pointing angle sensor, and the like.
  • the terminal 100 may further include a radio frequency (RF) circuit 180 for performing network communication with the wireless network device, and may further include a WiFi module 190 for performing WiFi communication with other devices.
  • The image display optimization method provided by the embodiment of the present application may be implemented by the software programs stored in the memory shown in FIG. 1 and may be specifically executed by the processor 120 of the terminal device 100, where the terminal device may include a camera. Specifically, as shown in FIG. 2, the image display optimization method provided by the embodiment of the present application includes:
  • the scenes that may be involved in the embodiments of the present application may include night scenes, backlighting, blue sky, green plants, food, portraits, cloudy, grass, cities, and the like.
  • The display instruction in the embodiment of the present application may be an instruction triggered by the user tapping a thumbnail, an instruction triggered by a left-or-right sliding gesture while the user views an image, an instruction triggered by an up-or-down sliding gesture while the user views an image, an instruction triggered by the user clicking an image identifier, a print preview instruction triggered when the user initiates printing, an image sharing preview instruction triggered when the user initiates image sharing, a display instruction for a screen-saver image, and the like.
  • The image identifier can be an image name or an image ID, and the like.
  • S220: Perform display optimization on the image to be displayed according to the identified scene information.
  • The display optimization may include one or more of the following operations in combination: color display optimization, contrast display optimization, and sharpness display optimization.
  • During image capture, an image analysis step is added to analyze the scene information (such as night scene or backlight), and the scene information is then written into the image.
  • When the display is optimized, the scene information contained in the image content information is used to optimize the display of the image, so that the displayed image is more accurate and detailed, while scene-recognition operations are avoided at the moment the image is opened, thereby improving the smoothness of image browsing.
  • the optimization algorithm provided by the embodiment of the present application can implement the transmission of the scene information at the shooting end and the display end without performing any hardware modification on the terminal device.
  • The scene information in the embodiment of the present application may be written into the image to be displayed in the following manner, as shown in FIG. 3:
  • S310: Display a preview picture, where the preview picture is a picture collected by the camera.
  • S320: Determine, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and image information in the preview picture, the scene information to which the image in the preview picture belongs.
  • During the period from when the camera is opened to when the shooting instruction is received, a multi-frame image is displayed in the preview picture, so the scenes to which the multiple frames of images in the preview picture belong can be classified, thereby obtaining the scene information.
  • When the shooting instruction is received, the highest-resolution frame among the last few frames in the above period is taken as the captured image, that is, the image to be displayed, and the scene information previously recognized during the preview stage is written into the captured image.
  • When the scene information is written into the image to be displayed, the scene information may be written into the vendor comment (MakerNote) field of the data area of the exchangeable image file (English: Exchangeable Image File, EXIF).
  • the scene information can also be written in other data areas of the image, or in other fields of the EXIF data area.
  • The format of the image to be displayed may be the JPEG format or the tag image file format (Tag Image File Format, TIFF for short).
  • The EXIF data area into which the scene information is written is an area specifically designated for digital photographing devices, used to record the attribute information of a digital photo and its shooting data.
  • The EXIF standard was developed by the JEIDA organization and is widely adopted by digital camera equipment vendors. A large amount of data is stored in the EXIF data area; each piece of data is stored in the data area in the form of an entry, with its function and ID specified.
  • The MakerNote field is specified in the EXIF standard for recording device-related information defined by the device manufacturer, and its ID is 0x927C. In the embodiment of the present application, the scene information is recorded in the MakerNote field, so there is no need to change the format of the EXIF data area.
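To make the storage layout concrete, the sketch below builds the 12-byte EXIF/TIFF IFD entry that points at a MakerNote payload. The tag ID 0x927C and the tag/type/count/value-or-offset layout follow the EXIF and TIFF specifications; the `scene=night` payload format and the `makernote_ifd_entry` helper name are illustrative assumptions, since the patent does not define how scene information is serialized.

```python
import struct

MAKER_NOTE_TAG = 0x927C   # MakerNote tag ID specified in the EXIF standard
TYPE_UNDEFINED = 7        # EXIF/TIFF data type for opaque byte payloads

def makernote_ifd_entry(payload, value_offset, big_endian=False):
    """Build the 12-byte IFD entry describing a MakerNote payload.

    An EXIF/TIFF IFD entry is: tag (2 bytes), type (2 bytes), count
    (4 bytes), then a 4-byte value-or-offset field. Payloads longer
    than 4 bytes live elsewhere in the file at `value_offset`.
    """
    fmt = (">" if big_endian else "<") + "HHII"
    return struct.pack(fmt, MAKER_NOTE_TAG, TYPE_UNDEFINED,
                       len(payload), value_offset)

# Hypothetical payload carrying the recognized scene.
entry = makernote_ifd_entry(b"scene=night", value_offset=0x1000)
tag, typ, count, offset = struct.unpack("<HHII", entry)
print(hex(tag), typ, count, hex(offset))  # 0x927c 7 11 0x1000
```

Because the entry records only a count and an offset, the scene payload can grow without changing the EXIF structure, which matches the point above that no data-area format change is needed.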
  • Optionally, the determining, according to the information acquired by the at least one sensor and the image information in the preview picture, of the scene information to which the image in the preview picture belongs may be implemented as follows:
  • A1: After receiving the opening instruction and before receiving the shooting instruction, classify, by predefined N scene classifiers according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, and acquire the scene type output by each scene classifier, where N is a positive integer not less than 2.
  • The terminal device is provided with multiple sensors, such as a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, and a lens pointing angle sensor. The parameters collected by these sensors and the image content of the current preview picture are input to a scene classifier, which outputs a classification result, thereby obtaining the scene type.
  • The light sensor measures the light level of the environment in which the terminal is located, and can be used to distinguish between high-light scenes and low-light scenes.
  • GPS sensors are used to measure the location of the terminal equipment.
  • the infrared sensor measures the distance by receiving the intensity of the infrared rays.
  • GPS sensors can be used to distinguish scenes that include buildings or scenes that do not include buildings; GPS sensors can also be used to distinguish between seascapes and snowscapes, and so on.
  • The barometric pressure sensor measures atmospheric pressure and can be used to distinguish between plateau scenes and non-plateau scenes.
  • The barometric pressure sensor, the magnetic sensor, the GPS sensor, the light sensor, or the laser sensor can also be used to distinguish indoor scenes from outdoor scenes.
  • The lens pointing angle sensor measures the angle between the lens and the horizontal direction. It can be used to distinguish selfie scenes from non-selfie scenes, and can also be used to distinguish blue sky scenes from green plant scenes.
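The sensor-to-scene distinctions above can be sketched as simple threshold rules. All thresholds and labels below are illustrative assumptions, not values from this application; a real terminal device would calibrate them and fuse the hints with the preview-image classifiers.

```python
def sensor_scene_hints(lux, pressure_hpa, lens_angle_deg):
    """Map raw sensor readings to coarse scene hints.

    Thresholds are hypothetical illustrations only; in practice they would
    be calibrated per device and combined with the image-content classifier.
    """
    hints = []
    # Light sensor: distinguish high-light from low-light scenes.
    hints.append("high_light" if lux > 1000 else "low_light")
    # Barometric pressure sensor: pressure falls with altitude, so a low
    # reading suggests a plateau scene.
    hints.append("plateau" if pressure_hpa < 700 else "non_plateau")
    # Lens pointing angle sensor: a lens tilted well above the horizon
    # suggests a blue-sky (or foliage canopy) shot.
    if lens_angle_deg > 45:
        hints.append("blue_sky_candidate")
    return hints
```

For example, `sensor_scene_hints(20000, 650, 60)` yields hints consistent with a bright, high-altitude, upward-pointing shot.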
  • A2 The scene information corresponding to the scene type output most often is determined as the scene information of the image to be displayed.
  • In a first possible implementation manner: in the configuration order of the N scene classifiers, after the opening instruction is received, the following operation is performed for every preset number of frames in the preview picture:
  • each time the preset number of frames is reached, one scene classifier selected based on the configuration order classifies, according to the information acquired by the at least one sensor and the image information in the preview picture at that time, the scene to which the image in the preview picture belongs.
  • In a second possible implementation manner: after the opening instruction is received, the following operation is performed for every preset number of frames in the preview picture:
  • each time the preset number of frames is reached, the N scene classifiers each classify, according to the information acquired by the at least one sensor and the image information in the preview picture at that time, the scene to which the image in the preview picture belongs.
  • Comparing the two implementation manners, the first is preferable: it does not require scene recognition for every frame of image, and each time the preset number of frames is reached only one scene classifier, taken in turn, performs scene classification and recognition, thereby saving time and reducing resource usage.
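The preferred first implementation — rotating through the classifiers every preset number of frames, then keeping the scene type output most often (step A2) — can be sketched as follows. The classifier functions and the 4-frame interval are assumptions for illustration:

```python
from collections import Counter

def classify_preview(frames, classifiers, preset=4):
    """Round-robin scene classification over the preview stream.

    Every `preset` frames, the next classifier in configuration order is
    applied once (so no frame requires running all N classifiers); at
    shutter time the scene type output most often wins.
    """
    votes = []
    for i, frame in enumerate(frames, start=1):
        if i % preset == 1:  # the preset number of frames has been reached
            clf = classifiers[(i // preset) % len(classifiers)]
            votes.append(clf(frame))
    # Step A2: majority vote over the classifier outputs.
    return Counter(votes).most_common(1)[0][0]
```

With two classifiers and `preset = 4`, frames 1, 9, 17, … go to the first classifier and frames 5, 13, 21, … to the second, matching the 8M−7 / 8M−3 pattern of FIG. 5B.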
  • In one possible implementation, display optimization based on the scene information written into the image to be displayed may specifically be implemented as follows:
  • B1 Obtain, from a preconfigured look-up table (English: Look-Up-Table, LUT for short), the optimization parameters corresponding to the scene information of the image to be displayed;
  • B2 Adjust the display parameters of the display screen of the terminal device based on the optimization parameters, and display the image to be displayed.
  • Display optimization may include one or more of the following operations: color display optimization, contrast display optimization, and sharpness display optimization. Accordingly, each of the three corresponds to its own LUT table.
  • Since color display optimization, contrast display optimization, and sharpness display optimization work on similar principles, the embodiment of the present application describes only contrast display optimization as an example; the other two are not described again.
  • The look-up table corresponding to contrast display optimization is shown in Table 1, in which Z1 and Z2 are the optimization parameters: Znight,1 and Znight,2 are the Z1 and Z2 values in the night scene; Zbacklight,1 and Zbacklight,2 are the Z1 and Z2 values in the backlight scene; Zsky,1 and Zsky,2 are the Z1 and Z2 values in the blue sky scene; and Zfoliage,1 and Zfoliage,2 are the Z1 and Z2 values in the green plant scene.
  • The contrast display optimization is essentially a piecewise gamma (Gamma) correction of the V component of the image in the hue (H), saturation (S), value (V) color space, to achieve contrast enhancement. Its mathematical model can be summarized as a power law applied per luminance segment:
  • Vout = a · Vin^γ
  • where Vin represents the pixel value of the input image, Vout represents the pixel value of the output image, γ represents the gamma correction factor, and a represents a constant factor.
  • The adjustment curves of Vin and Vout are shown in FIG. 4. In the low-luminance region, Vin < Z1, γ > 1 is taken and the pixel values of the dark areas of the image are suppressed; in the relatively high-luminance region, Vin > Z2, γ < 1 is taken and the pixel values of the bright areas of the image are raised. The adjusted image thus has a large dynamic range in both the high-luminance and the low-luminance areas.
  • It should be noted that the selection of Z1 and Z2 in the embodiment of the present application needs to be adjusted according to the scene information of the image.
  • The night scene image has a large number of dark pixels, so Z1 should take a lower value to avoid the loss of dark-area detail caused by lowering the gray values of the dark pixels; at the same time, Z2 takes a slightly lower value, but one larger than Z1, to appropriately increase the dynamic range of the bright areas. In a backlight image, the photographed subject area contains some dark pixels while a large number of bright pixels are concentrated in the background area; therefore, for the backlight scene, a higher Z1 value should be selected to enhance the detail of the photographed subject, and a higher Z2 value should be selected to avoid overexposure of the bright areas. Different parameters are tuned for the images corresponding to different scene information, finally forming the LUT table shown in Table 1.
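A minimal sketch of the scene-keyed contrast LUT and the piecewise gamma correction is shown below. The numeric Z1, Z2, and γ values are invented for illustration only; they are not parameters of any actual device:

```python
# Hypothetical LUT: (Z1, Z2, gamma_dark, gamma_bright) per scene.
# Night: low Z1 preserves dark detail; backlight: higher Z1 and Z2.
LUT = {
    "night":     (0.10, 0.60, 1.2, 0.9),
    "backlight": (0.40, 0.85, 1.5, 0.8),
}

def contrast_correct(v_in, scene, a=1.0):
    """Piecewise gamma correction of the V component (normalized to [0, 1]):
    gamma > 1 below Z1 suppresses dark pixels, gamma < 1 above Z2 lifts
    bright pixels, and the mid range passes through unchanged."""
    z1, z2, g_dark, g_bright = LUT[scene]
    if v_in < z1:
        return a * v_in ** g_dark      # dark region: suppressed
    if v_in > z2:
        return a * v_in ** g_bright    # bright region: raised
    return v_in                        # mid range: unchanged
```

For instance, `contrast_correct(0.05, "night")` suppresses a dark pixel, while `contrast_correct(0.9, "backlight")` lifts a bright one.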
  • FIG. 5A and FIG. 5B are a flowchart and a schematic diagram of the method by which the shooting end writes scene information into an image:
  • S510 The user triggers turning on of the camera, so that the terminal device starts the camera after receiving the opening instruction generated by the user's trigger.
  • S520 From the start of displaying the preview picture until the shooting instruction triggered by the user is received, the camera started by the terminal device acquires, in real time, the image the user is about to shoot.
  • S530 During the preview, every preset number of frames, the terminal device uses the night scene classifier and the backlight classifier in turn to classify the scene to which the image in the preview picture belongs.
  • Specifically, the information acquired by at least one of the light sensor, the GPS sensor, the infrared sensor, the magnetic sensor, the laser sensor, the barometric pressure sensor, or the lens pointing angle sensor, together with the image information in the preview picture, is input into the night scene classifier or the backlight classifier.
  • For example, the preset number of frames is 4, and a total of 50 frames of images appear while the preview picture is presented on the display interface. Therefore, when the (8M-7)-th frame is reached, the night scene classifier classifies the scene to which the (8M-7)-th frame image belongs; when the (8M-3)-th frame is reached, the backlight classifier classifies the scene to which the (8M-3)-th frame image belongs, where M = {1, 2, 3, ..., 7}, as shown in FIG. 5B.
  • S540 When receiving the shooting instruction triggered by the user, the terminal device captures the image to be displayed, counts the number of times the preview images were judged to be a night scene or a backlight scene, and selects the scene information corresponding to the most frequently output scene type as the scene information of the image to be displayed.
  • S550 The terminal device writes the image to be displayed into the data area of the image file, and writes the scene information into the Maker Note field of the EXIF data area.
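As a simplified sketch of storing the scene label under the MakerNote tag ID 0x927c, a scene string can be packed into and parsed back out of a MakerNote-style payload. Note this is not the full EXIF entry wire format (a real IFD entry also carries a type field and an offset); the byte layout here is an assumption for illustration, and in practice an EXIF library would write the actual entry:

```python
MAKER_NOTE_TAG = 0x927C  # EXIF tag ID reserved for the MakerNote field

def pack_scene(scene):
    """Pack a scene label as [tag:2][length:2][payload] (simplified layout)."""
    payload = scene.encode("utf-8")
    return (MAKER_NOTE_TAG.to_bytes(2, "big")
            + len(payload).to_bytes(2, "big")
            + payload)

def unpack_scene(blob):
    """Recover the scene label from the payload at display time (step S620)."""
    if int.from_bytes(blob[0:2], "big") != MAKER_NOTE_TAG:
        raise ValueError("not a MakerNote payload")
    length = int.from_bytes(blob[2:4], "big")
    return blob[4:4 + length].decode("utf-8")
```

The round trip `unpack_scene(pack_scene("night"))` returns the scene label unchanged, which is all the display end needs before its LUT lookup.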
  • FIG. 6A and FIG. 6B show a flowchart and a schematic diagram of the display optimization method:
  • S610 Receive a user-triggered display instruction for displaying an image, for example, a display instruction triggered by the user clicking an image thumbnail in the gallery grid interface.
  • S620 Perform scene parsing on the image file that the user clicked, and parse out the scene information stored in the Maker Note field.
  • S630 Perform color display optimization, contrast display optimization, and sharpness display optimization on the clicked image according to the parsed scene information to obtain optimization parameters, as shown in FIG. 6B.
  • S640 Adjust the display parameters of the display screen according to the optimization parameters, and display the image clicked by the user.
  • Based on the same inventive concept as the method embodiments, the embodiment of the present application further provides an image display optimization apparatus, which may be applied to a terminal device; the apparatus may be disposed in the terminal device, or may be implemented by the terminal device.
  • As shown in FIG. 7A and FIG. 7B, the apparatus may include a display end and a shooting end.
  • the display end includes a receiving module 710, a scene recognition module 720, a display optimization module 730, and a display module 740.
  • The shooting end may include a scene information writing module 750, and may also include a camera; of course, the camera may be an external device.
  • the receiving module 710 is configured to receive a display instruction triggered by the user for displaying an image to be displayed;
  • the scene recognition module 720 is configured to identify scene information included in the image to be displayed, where the scene information is written to the image to be displayed when the image to be displayed is captured by a camera.
  • the display optimization module 730 is configured to perform display optimization on the image to be displayed according to the scene information of the image to be displayed that is recognized by the scene recognition module 720.
  • the display module 740 is configured to display the image to be displayed after it has been optimized by the display optimization module 730.
  • the display optimization module 730 can include a color display optimization module 731, a contrast display optimization module 732, and a sharpness display optimization module 733.
  • The color display optimization module 731 is used for display optimization of the color of the image to be displayed, the contrast display optimization module 732 for display optimization of its contrast, and the sharpness display optimization module 733 for display optimization of its sharpness.
  • the display module 740 is further configured to: after the receiving module 710 receives the opening instruction of the camera, display a preview screen, where the preview screen is a screen collected by the camera.
  • the device may further include: a scene information writing module 750, configured to write the scene information into the image to be displayed by:
  • determining, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and the image information in the preview picture displayed by the display module, the scene information to which the image in the preview picture belongs;
  • when the receiving module receives an image shooting instruction, capturing the image to be displayed, and writing the obtained scene information into the image to be displayed.
  • The scene information writing module 750, when determining the scene information to which the image in the preview picture belongs according to the information acquired by the at least one sensor and the image information in the preview picture, is specifically configured to:
  • after the receiving module receives the opening instruction and before it receives the shooting instruction, classify the scene to which the image in the preview picture belongs by N predefined scene classifiers, according to the information acquired by the at least one sensor and the image information in the preview picture, and acquire the scene type output by each scene classifier, where N is a positive integer not less than 2;
  • determine the scene information corresponding to the scene type output most often as the scene information of the image to be displayed.
  • The scene information writing module 750, when classifying, by the N predefined scene classifiers, the scene to which the image in the preview picture belongs, is specifically configured to:
  • perform, in the configuration order of the N scene classifiers and after the receiving module receives the opening instruction, the following operation for every preset number of frames in the preview picture:
  • each time the preset number of frames is reached, classify, by one scene classifier selected based on the configuration order, the scene to which the image in the preview picture belongs.
  • The scene information writing module 750, when writing the obtained scene information into the image to be displayed, is specifically configured to write the obtained scene information into the manufacturer note MakerNote field of the exchangeable image file EXIF data area of the image.
  • The display optimization module 730 is specifically configured to: obtain, from a preconfigured look-up table, the optimization parameters corresponding to the scene information of the image to be displayed; and adjust the display parameters of the display screen of the terminal device based on the optimization parameters and display the image to be displayed.
  • Specifically, the scene information is input to the color display optimization module 731 to obtain a color parameter, to the contrast display optimization module 732 to obtain a contrast parameter, and to the sharpness display optimization module 733 to obtain a sharpness parameter, as shown in FIG. 7B.
  • The division of the modules in the embodiment of the present application is schematic and is only a logical function division; an actual implementation may use another division manner.
  • the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present application in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a terminal device, or the like) or a processor (for example, the processor 120 shown in FIG. 1) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system, or a computer program product.
  • Therefore, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An image display optimization method, apparatus, and terminal, for solving the prior-art problem that image display is time-consuming when scene information analysis is added. The method includes: upon receiving a user-triggered display instruction for displaying an image to be displayed, identifying scene information included in the image to be displayed, where the scene information was written into the image to be displayed when the camera captured the image to be displayed; performing display optimization on the image to be displayed according to the identified scene information; and displaying the image to be displayed after the display optimization.

Description

一种图像显示优化方法及装置
本申请要求在2016年10月17日提交中国专利局、申请号为201610905784.1、发明名称为“一种图片显示方法和终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像显示优化方法及装置。
背景技术
图像显示优化是根据待显示图像的内容对终端设备屏幕的显示参数进行调整,从而使得用户能够针对待显示图像获得最佳观赏体验。
目前终端设备上对图像进行显示优化的流程具体是:用户在图像库中点击图像缩略图之后,终端设备的屏幕加载图像放大过渡动画,直至最终图像在屏幕上完全显示。在屏幕加载图像放大的过渡动画过程中,终端设备的显示优化模块在后台对图像的灰度直方图、色彩直方图或者感光度(ISO)值进行分析得到分析结果,然后根据分析结果调整终端设备屏幕的显示参数,但是这些基本信息并不能准确地反映图像的内容,进而导致相关的优化算法产生错误的屏幕调整参数,影响最终显示效果。例如,对于夜景图像和逆光图像,虽然二者在灰度直方图的暗部区域均分布着大量的像素点,但是对于逆光图像应该凸显拍照主体的暗部细节,则针对夜景图像则不需要。
为了提高显示效果,在图像分析环节来分析场景信息(如夜景、逆光),这样在显示优化时,获取到的图像内容信息就更加准确、细致。但是这样会增加图像分析环节的计算复杂度。通常情况下,终端设备从接收到点击缩略图指令到实现图像完全显示的耗费时间在500ms到800ms之间,如果在这个环节加入场景识别算法来获取场景信息,会大大增加图像上述操作耗时,给用户造成卡顿的操作体验。
发明内容
本申请实施例提供了一种图像显示优化方法、装置及终端,用以解决现有技术中存在的在增加场景信息分析的情况下,图像显示耗时大的问题。
第一方面,本申请提供了一种图像显示优化方法,所述方法可以应用于终端设备,该终端设备可以包括摄像头。所述方法包括:
接收到用户触发的用于显示待显示图像的显示指令时,识别所述待显示图像包括的场景信息,其中,所述场景信息是在所述摄像头拍摄所述待显示图像时写入所述待显示图像的;根据识别出的所述场景信息,对所述待显示图像进行显示优化;显示经过显示优化后的所述待显示图像。
本申请实施例中可涉及的场景,可以包括夜景、逆光、蓝天、绿色植物、美食、人像、阴天、草地、城市等等。
其中,显示优化可以包括如下操作中的一个或多个组合:颜色显示优化,对比度显示优化和清晰度显示优化。
本申请实施例中,为了提高显示效果,在通过摄像头拍摄图像时,加入图像分析环节来分析场景信息(如夜景、逆光),然后将场景信息写入图像,从而在显示优化时,通过获取到的图像内容信息中的包括的场景信息对图像进行显示优化,使得显示的图像就更加准确、细致,也避免了在打开图像的时候进行场景识别相关运算,提升了图像浏览的流畅程度。另外,通过本申请实施例提供的优化算法,不需要对终端设备进行任何硬件修改,即可实现场景信息的在拍摄端以及显示端的传递。
在一种可能的设计中,所述场景信息通过如下方式写入所述待显示图像:
接收到所述摄像头的开启指令时,在所述终端设备上显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面;
根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;
在接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
通过上述设计,利用拍照预览环节对场景信息进行识别,可以获取更多的传感器信息提升判别结果的准确性。
在一种可能的设计中,所述根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息,包括:
在接收到所述开启指令后,在接收到所述拍摄指令之前,通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场景分类器输出的场景类型;其中,N为不小于2的正整数;
将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
通过上述设计,利用多帧预览画面的判别结果进行融合,进一步提升场景识别算法的鲁棒性。
在一种可能的设计中,所述通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,包括:
按照所述N个场景分类器的配置顺序,针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
在一种可能的设计中,所述将得到的所述场景信息写入待显示图像,包括:
将得到的所述场景信息写入所述图像的可交换图像文件EXIF数据区域 的厂商注释MakerNote字段中。
在不增加拍摄端(摄像头)以及显示端(显示器)的耦合性的情况下,利用EXIF区域中的MakerNote字段存储场景信息,拉通拍摄端与显示端的场景信息传递通路,不需要更改或增加硬件即可实现通过场景信息进行显示优化。
在一种可能的设计中,基于写入所述待显示图像的场景信息对所述待显示图像进行显示优化,包括:
获取预配置的查找表中所述待显示图像的场景信息对应的优化参数;
基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
第二方面,本申请实施例还一种图像显示优化装置,所述装置应用于包括摄像头的终端设备,包括:
接收模块,用于接收用户触发的用于显示待显示图像的显示指令;
场景识别模块,用于所述接收模块接收到的所述显示指令时,识别所述待显示图像包括的场景信息,所述场景信息是在通过所述摄像头拍摄所述待显示图像时写入所述待显示图像的;
显示优化模块,用于根据所述场景识别模块识别出的所述场景信息,对所述待显示图像进行显示优化;
显示模块,用于显示经过所述显示优化模块显示优化后的所述待显示图像。
在一种可能的设计中,所述显示模块,还用于在所述接收模块接收到所述摄像头的开启指令后,显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面;
所述装置还包括:
场景信息写入模块,用于通过如下方式将所述场景信息写入所述待显示图像:
根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、 激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述显示模块显示的所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;
在所述接收模块接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
在一种可能的设计中,所述场景信息写入模块,在根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息时,具体用于:
在所述接收模块接收到所述开启指令后以及在接收到所述拍摄指令之前,通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场景分类器输出的场景类型;其中,N为不小于2的正整数;
将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
在一种可能的设计中,所述场景信息写入模块,在通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及本次达到预设帧数时所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类时,具体用于:
按照所述N个场景分类器的配置顺序,针对在所述接收模块接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
在一种可能的设计中,所述场景信息写入模块,在将得到的所述场景信息写入待显示图像时,具体用于将得到的所述场景信息写入所述图像的可交换图像文件EXIF数据区域的厂商注释MakerNote字段中。
在一种可能的设计中,所述显示优化模块,具体用于:
获取预配置的查找表中所述待显示图像的场景信息对应的优化参数;
基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
第三方面,本申请实施例还提供了一种终端,包括:
摄像头,用于拍摄待显示图像;
处理器,用于接收到用户触发的用于显示所述待显示图像的显示指令时,识别所述待显示图像包括的场景信息,其中,所述场景信息是在所述摄像头拍摄所述待显示图像时写入所述待显示图像的;根据识别出的所述场景信息,对所述待显示图像进行显示优化;
显示器,用于显示经过显示优化后的所述待显示图像。
处理器可以包括一个或多个通用处理器。显示器可以采用液晶显示器(英文:Liquid Crystal Display,简称:LCD)或OLED(英文:Organic Light-Emitting Diode,简称:有机发光二极管)等。
在一种可能的设计中,所述处理器,还用于在接收所述摄像头的开启指令时,指示所述显示器显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面;
所述显示器,还用于显示所述预览画面;
所述处理器,还用于根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;在接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
在一种可能的设计中,所述处理器,在根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息时,具体用于:
在接收到所述开启指令后,在接收到所述拍摄指令之前,通过预定义的N 个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场景分类器输出的场景类型;其中,N为不小于2的正整数;
将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
在一种可能的设计中,所述处理器,在通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类时,具体用于:
按照所述N个场景分类器的配置顺序,针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
在一种可能的设计中,所述处理器,在将得到的所述场景信息写入待显示图像时,具体用于:
将得到的所述场景信息写入所述图像的可交换图像文件EXIF数据区域的厂商注释MakerNote字段中。
在一种可能的设计中,所述处理器,在基于写入所述待显示图像的场景信息对所述待显示图像进行显示优化时,具体用于:
获取预配置的查找表中所述待显示图像的场景信息对应的优化参数;
基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
附图说明
图1为本申请实施例提供的一种终端设备示意图;
图2为本申请实施例提供的一种图像显示优化方法流程图;
图3为本申请实施例提供的一种场景信息写入方法流程图;
图4为本申请实施例提供的Vin、Vout的调整曲线示意图;
图5A为本申请实施例提供的场景信息写入图像的方法流程图;
图5B为本申请实施例提供的场景信息写入图像的方法示意图;
图6A为本申请实施例提供的显示优化方法流程图;
图6B为本申请实施例提供的显示优化方法示意图;
图7A本申请实施例提供的显示优化装置示意图;
图7B为本申请实施例提供的显示优化装置实现显示优化原理示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
本申请实施例提供一种图像显示优化方法及装置,用以解决现有技术中存在的在增加场景信息分析的情况下,图像显示耗时大的问题。其中,方法和装置是基于同一发明构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
本申请实施例的图像显示优化方案可使用各种具有拍照功能且能够用于显示的电子设备进行实施,该电子设备包括但不限于个人计算机、服务器计算机、手持式或膝上型设备、移动设备(比如移动电话、平板电脑、个人数字助理、媒体播放器等等)、消费型电子设备、小型计算机、大型计算机,等等。但该电子设备优选为智能移动终端,下面以智能移动终端为例对本申请实施例提供的方案进行具体描述。
参考图1所示,为本申请实施例应用的终端的硬件结构示意图。如图1 所示,终端100包括显示设备110、处理器120以及存储器130。存储器130可用于存储软件程序以及数据,处理器120通过运行存储在存储器130的软件程序以及数据,从而执行终端100的各种功能应用以及数据处理。存储器130可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如图像显示优化功能、场景分类器功能等)等;存储数据区可存储根据终端100的使用所创建的数据(比如音频数据、电话本、可交换图像文件EXIF等)等。此外,存储器130可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。处理器120是终端100的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器130内的软件程序和/或数据,执行终端100的各种功能和处理数据,从而对终端进行整体监控。处理器120可以包括一个或多个通用处理器,还可包括一个或多个DSP(Digital Signal Processor,数字信号处理器),用于执行相关操作,以实现本申请实施例所提供的技术方案。
终端100还可以包括输入设备140,用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势,以及产生与终端100的用户设置以及功能控制有关的信号输入等。具体地,本申请实施例中,该输入设备140可以包括触控面板141。触控面板141,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板141上或在触控面板141的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板141可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器120,并能接收处理器120发来的命令并加以执行。例如,用户在触控面板141上用手指单击一张图片,触摸检测装置检测到此次单击带来的这个信号,然后将该信号传送给触摸控制器,触摸控制器再将这个信号转换成坐标发送给处理器120,处理器120根据该坐标和该信号的类 型(单击或双击)确定对该图片所执行的操作(如图片放大、图片全屏显示),然后,确定执行该操作所需要占用的内存空间,若需要占用的内存空间小于空闲内存,则将该应用启动后的界面全屏显示在显示设备包括的显示面板111上,从而实现应用启动。
触控面板141可以采用电阻式、电容式、红外线以及表面声波等多种类型实现。除了触控面板141,输入设备140还可以包括其他输入设备142,其他输入设备142可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示设备110,包括的显示面板111,用于显示由用户输入的信息或提供给用户的信息以及终端设备100的各种菜单界面等,在本申请实施例中主要用于显示终端100中图像。可选的,显示面板可以采用液晶显示器(英文:Liquid Crystal Display,简称:LCD)或OLED(英文:Organic Light-Emitting Diode,简称:有机发光二极管)等形式来配置显示面板111。在其他一些实施例中,触控面板141可覆盖显示面板111上,形成触摸显示屏。
除以上之外,终端100还可以包括用于给其他模块供电的电源150以及用于拍摄照片或视频的摄像头160。终端100还可以包括一个或多个传感器170,例如加速度传感器、光线传感器、GPS传感器、红外传感器、激光传感器、位置传感器或镜头指向角度传感器等。终端100还可以包括无线射频(Radio Frequency,RF)电路180,用于与无线网络设备进行网络通信,还可以包括WiFi模块190,用于与其他设备进行WiFi通信。
本申请实施例提供的图像显示优化方法可以实现在图1所示的存储软件程序中,具体可以由终端设备100的处理器120来执行,该终端设备可以包括摄像头。具体的,如图2所示,本申请实施例提供的图像显示优化方法,包括:
S210,接收到用户触发的用于显示待显示图像的显示指令时,识别所述待显示图像包括的场景信息。其中,所述场景信息是在通过摄像头拍摄所述待显示图像时写入所述待显示图像的。
本申请实施例中可涉及的场景,可以包括夜景、逆光、蓝天、绿色植物、美食、人像、阴天、草地、城市等等。
本申请实施例中的显示指令可以是用户点击缩略图触发的指令,或者用户查看图像时的左右滑动手势触发的指令,还可以是用户查看图像时的上下滑动手势触发的指令,或者是用户点击图像标识触发的指令,或者是用户触发打印的打印预览指令,或者是用户触发图像分享的图像分享预览指令,或者屏幕保护图像的显示指令,等等。图像标识可以是图像名称或者图像ID,等等。
S220,根据识别出的所述场景信息,对所述待显示图像进行显示优化。
其中,显示优化可以包括如下操作中的一个或多个组合:颜色显示优化,对比度显示优化和清晰度显示优化。
S230,显示经过显示优化后的所述待显示图像。
本申请实施例中,为了提高显示效果,在通过摄像头拍摄图像时,加入图像分析环节来分析场景信息(如夜景、逆光),然后将场景信息写入图像,从而在显示优化时,通过获取到的图像内容信息中的包括的场景信息对图像进行显示优化,使得显示的图像就更加准确、细致,也避免了在打开图像的时候进行场景识别相关运算,提升了图像浏览的流畅程度。另外,通过本申请实施例提供的优化算法,不需要对终端设备进行任何硬件修改,即可实现场景信息的在拍摄端以及显示端的传递。
可选的,本申请实施例涉及的所述场景信息可以通过如下方式写入所述待显示图像,如图3所示:
S310,在接收到所述摄像头的开启指令时,在所述终端设备上显示预览画面。
其中,所述预览画面为通过所述摄像头采集到的画面。
S320,根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息。
S330,在接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
需要说明的是,在摄像头开启以后,以及在用户触发拍摄指令之前的这段时间,在预览画面内会显示多帧图像,因此可以针对预览画面内的多帧图像所属的场景分类进行分类,从而得到场景信息。在用户触发拍摄指令时,将上述这段时间内的最后几帧图像中拍摄清晰度最高的作为拍摄的图像,即待显示图像,然后将之前在预览画面阶段识别的场景信息写入拍摄得到的图像。
可选地,在将所述场景信息写入待显示图像时,可以将所述场景信息写入所述图像的可交换图像文件(英文:Exchangeable Image File,简称:EXIF)EXIF数据区域的厂商注释(MakerNote)字段中。当然场景信息也可以写入在图像的其它数据区域,或者EXIF数据区域的其它字段中。待显示图像的格式可以是JEPG格式或者标签图像文件格式(英文:Tag Image File Format,简称:TIFF)等。
需要说明的是,场景信息写入的可交换图像文件EXIF数据区域是专门为数码拍照设备指定的,用以记录数码照片的属性信息和拍摄数据的区域。EXIF标准由JEIDA组织制定并被各大数码拍照设备场景广泛采用。EXIF数据区域中存储了大量数据,各条数据以条目的形式存储在数据区域中,并规定好了各自的功能和ID。Maker Note字段在EXIF标准中被规定用于记录设备厂商的设备相关信息,其ID为0x927c。本申请实施例中,将场景信息记录在Maker Note字段中,不需要对EXIF数据区域进行格式上的更改。
可选地,所述根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息,可以通过如下方式实现:
A1,在接收到所述开启指令后,在接收到所述拍摄指令之前,通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场 景分类器输出的场景类型。其中,N为不小于2的正整数。
具体的,终端设备中设置有多个传感器,比如光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器、镜头指向角度传感器等,通过多个传感器采集到的参数以及当前预览画面中图像的内容,输入到场景分类器中输出分类结果得到场景类型。
光线传感器用于来测量终端设备所处环境的光线。比如可以通过光线传感器来区分高光场景以及低光场景。GPS传感器用于测量终端设备所处的位置。红外传感器通过接收到红外线的强度,测定距离。比如,GPS传感器可以用来区分包括建筑物的场景或以及不包括建筑物的场景;GPS传感器还可以用来区分海景与雪景等等。气压传感器用于测量大气压强,气压传感器可以用来区分高原场景以及非高原场景。气压传感器或者磁力传感器或者GPS传感器或者光线传感器或者激光传感器还可以用来区分室内场景或者室外场景。镜头指向角度传感器用于测量镜头与水平方向的角度。镜头指向角度传感器可以用来区分自拍场景以及非自拍场景,镜头指向角度传感器还可以用来判别蓝天场景与绿色植物场景等等。
A2,将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
具体的,在通过预定义的N个场景分类器,针对所述预览画面内的图像所属的场景进行分类时,可以通过如下方式实现:
第一种可能的实现方式:
按照所述N个场景分类器的配置顺序,针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
第二种可能的实现方式:
针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,分别采用所述N个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
对比第一种可能的实现方式以及第二种可能的实现方式,较优的为第一种可能的实现方式,第一种可能的实现方式既不需要每帧图像都进行场景识别,并且每隔预设帧数时,依次采用一种场景分类器进行场景分类识别,从而节省了时间,减少的资源的使用。
在一种可能的实现方式中,在基于写入所述待显示图像的场景信息对所述待显示图像进行显示优化时,具体可以通过如下方式实现:
B1,获取预配置的查找表(英文:Look-Up-Table,简称:LUT)中所述待显示图像的场景信息对应的优化参数;
B2,基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
本申请实施例中,显示优化可以包括如下操作中的一个或多个组合:颜色显示优化,对比度显示优化和清晰度显示优化。因此,颜色显示优化,对比度显示优化和清晰度显示优化分别对应一个LUT表。
由于颜色显示优化、对比度显示优化和清晰度显示优化具有类似的工作原理,所以本申请实施例仅以对比度显示优化为例进行说明,其它两种不再赘述。
对比度显示优化对应的查找表如表1所示,表1中Z1以及Z2分别表示优化参数,Znight,1表示夜景场景下的Z1值,Znight,2表示夜景场景下的Z2值;Zbacklight,1表示逆光场景下的Z1值,Zbacklight,2表示逆光场景下的Z2值;Zsky,1表示蓝天场景下的Z1值,Zsky,2表示蓝天场景下的Z2值;Zfoliage,1表示绿色植物场景下的Z1值,Zfoliage,2表示绿色植物场景下的Z2值。
表1
  Z1 Z2
夜景场景 Znight,1 Znight,2
逆光场景 Zbacklight,1 Zbacklight,2
蓝天场景 Zsky,1 Zsky,2
绿色植物场景 Zfoliage,1 Zfoliage,2
下面对对比度显示优化对应的LUT表生成原理进行说明:
对比度显示优化在本质上是将图像在色度(H)饱和度(S)亮度(V)色彩空间中的V分量进行分段伽马(Gamma)校正,以达到对比度增强的目的。其数学模型可归纳为:
Figure PCTCN2016108730-appb-000001
其中,Vin表示输入的图像的像素值、Vout表示输出的图像的像素值;γ表示Gamma校正因子;a表示常数因子。
Vin、Vout的调整曲线如图4所示,在亮度低的区域Vin<Z1,取γ>1,图像中的暗部区域的像素值被抑制,在亮度比较高的区域Vin>Z2,取γ<1,图像中亮部区域的像素值被提升。调整后的图像在保证亮度高的区域和亮度低的区域都具有较大的动态范围。
需要说明的是,本申请实施例中Z1和Z2的选取需要根据图像的场景信息进行调节。夜景图像有大量暗像素点,所以Z1应该选取一个较低的值,避免压低暗部像素点的灰度值而造成的暗部细节丢失;同时Z2取值稍低但是大于Z1,适当增加亮部动态范围。逆光图像拍照主体区域具有部分暗像素点,同时背景区域集中了大量的亮像素点,因此针对逆光场景,应该选取较高Z1值以提升拍照主体部分的细节,同时选取较高的Z2值避免亮部区域过曝。针对不同场景信息对应的图像,调整出不同的参数,最终形成如表1所示的LUT表。
下面以场景类型包括夜景场景和逆光场景为例,如图5A及图5B所示为 拍摄端将场景信息写入图像的方法流程示意图:
S510,用户触发开启摄像头,从而终端设备接收到用户触发产生的开启指令后,启动摄像头。
S520,终端设备从显示预览画面开始直到接收到用户触发拍摄指令之前,终端设备启动的摄像头实时获取用户即将拍摄的图像。
S530,在预览画面过程中,终端设备每隔预设帧数依次采用夜景分类器以及逆光分类器对预览画面内的图像所属的场景进行分类。
具体的,将根据光线传感器、GPS传感器、红外传感器、磁力传感器、激光传感器、气压传感器或镜头指向角度传感器中的至少一种传感器获取的信息以及预览画面中的图像信息输入夜景分类器或者逆光分类器。
比如预设帧数为4帧。在显示界面呈现预览画面过程中总共出现了50帧图像,因此达到第8M-7帧时,采用夜景分类器对第8M-7帧图像所属的场景进行分类,在达到第8M-3帧时,采用逆光分类器对第8M-3帧图像所属的场景进行分类,其中M={1,2,3,…,7},如图5B所示。
S540,终端设备在接收到用户触发的拍摄指令时,拍摄待显示图像,并统计判别为夜景场景和逆光场景的次数,选择次数最多的场景类型对应的场景信息作为用户触发的拍摄指令待显示图像的场景信息。
S550,终端设备将待显示图像写入图像的数据区域,并将所述场景信息写入EXIF数据区域的Maker Note字段中。
如图6A及图6B所示为显示优化的方法流程示意图:
S610,接收到用户触发的用于显示图像的显示指令。比如用户在图库宫格界面点击图像缩略图触发的显示指令。
S620,对用户点击的图像文件进行场景解析,解析出存储在Maker Note中的场景信息。
S630,根据解析出的场景信息对用户点击的图像进行颜色显示优化、对比度显示优化、清晰度显示优化得到优化参数;如图6B所示。
S640,根据优化参数调整显示屏的显示参数并显示用户点击的图像。
基于与方法实施例同样的发明构思,本申请实施例还提供了一种图像显示优化装置,所述装置可以应用于终端设备,具体的所述装置可以设置在终端设备内,还可以由终端设备实现。如图7A及7B所示,所述装置可以包括显示端以及拍摄端。其中,显示端包括接收模块710,场景识别模块720、显示优化模块730以及显示模块740;拍摄端可以场景信息写入模块750,还可以包括摄像头,当然摄像头可以是外置设备。
接收模块710,用于接收用户触发的用于显示待显示图像的显示指令;
场景识别模块720,用于识别所述待显示图像包括的场景信息,所述场景信息是在通过摄像头拍摄所述待显示图像时写入所述待显示图像的。
显示优化模块730,用于根据所述场景识别模块720识别出的所述待显示图像的场景信息,对所述待显示图像进行显示优化。
显示模块740,用于显示经过所述显示优化模块730显示优化后的所述待显示图像。
可选地,显示优化模块730可以包括颜色显示优化模块731、对比度显示优化模块732以及清晰度显示优化模块733。颜色显示优化模块731用于对待显示图像的颜色进行显示优化,对比度显示优化模块732用于对待显示图像的对比度进行显示优化,清晰度显示优化模块733用于对待显示图像的清晰度进行显示优化。
可选地,所述显示模块740,还用于在所述接收模块710接收到所述摄像头的开启指令后显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面。
所述装置还可以包括:场景信息写入模块750,用于通过如下方式将所述场景信息写入所述待显示图像:
根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述显示模块显示的所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;
在所述接收模块接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
可选地,所述场景信息写入模块750,在根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息时,具体用于:
在所述接收模块接收到所述开启指令后以及接收到所述拍摄指令之前,通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场景分类器输出的场景类型;其中,N为不小于2的正整数;
将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
可选地,所述场景信息写入模块750,在根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息时,具体用于:
按照所述N个场景分类器的配置顺序,针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时所述预览画面中的图像信息采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
可选地,所述场景信息写入模块750,在将得到的所述场景信息写入待显示图像时,具体用于将得到的所述场景信息写入所述图像的可交换图像文件EXIF数据区域的厂商注释MakerNote字段中。
可选地,所述显示优化模块730,具体用于:
获取预配置的查找表中所述待显示图像的场景信息对应的优化参数;
基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
具体的,场景信息输入到颜色显示优化模块731后得到颜色参数,场景信息输入到对比度显示优化模块732后得到对比度参数,场景信息输入到清晰度显示优化模块733得到清晰度参数,如图7B所示。
需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。在本申请的实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,一种终端设备等)或处理器(例如图1所示的处理器120)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通 过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请实施例的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (18)

  1. 一种图像显示优化方法,其特征在于,所述方法应用于包括摄像头的终端设备,包括:
    接收到用户触发的用于显示待显示图像的显示指令时,识别所述待显示图像包括的场景信息,其中,所述场景信息是在所述摄像头拍摄所述待显示图像时写入所述待显示图像的;
    根据识别出的所述场景信息,对所述待显示图像进行显示优化;
    显示经过显示优化后的所述待显示图像。
  2. 如权利要求1所述的方法,其特征在于,所述场景信息通过如下方式写入所述待显示图像:
    接收到所述摄像头的开启指令时,在所述终端设备上显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面;
    根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;
    在接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
  3. 如权利要求2所述的方法,其特征在于,所述根据所述至少一种传感器获取的信息,和所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息,包括:
    在接收到所述开启指令后,在接收到所述拍摄指令之前,通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,获取各场景分类器输出的场景类型;其中,N为不小于2的正整数;
    将输出次数最多的场景类型对应的场景信息确定为所述待显示图像的场景信息。
  4. 如权利要求3所述的方法,其特征在于,所述通过预定义的N个场景分类器,根据所述至少一种传感器获取的信息以及所述预览画面中的图像信息针对所述预览画面内的图像所属的场景进行分类,包括:
    按照所述N个场景分类器的配置顺序,针对在接收到开启指令后,所述预览画面内的每隔预设帧数的图像执行如下操作:
    在本次达到预设帧数时,根据所述至少一种传感器获取的信息以及本次达到预设帧数时预览画面中的图像信息,采用基于所述配置顺序选择的一个场景分类器针对本次达到预设帧数时所述预览画面内的图像所属的场景进行分类。
  5. 如权利要求2至4任一项所述的方法,其特征在于,所述将得到的所述场景信息写入待显示图像,包括:
    将得到的所述场景信息写入所述图像的可交换图像文件EXIF数据区域的厂商注释MakerNote字段中。
  6. 如权利要求1至5任一项所述的方法,其特征在于,基于写入所述待显示图像的场景信息对所述待显示图像进行显示优化,包括:
    获取预配置的查找表中所述待显示图像的场景信息对应的优化参数;
    基于所述优化参数调整所述终端设备的显示屏的显示参数并显示所述待显示图像。
  7. 一种图像显示优化装置,其特征在于,所述装置应用于包括摄像头的终端设备,包括:
    接收模块,用于接收用户触发的用于显示待显示图像的显示指令;
    场景识别模块,用于所述接收模块接收到的所述显示指令时,识别所述待显示图像包括的场景信息,所述场景信息是在通过所述摄像头拍摄所述待显示图像时写入所述待显示图像的;
    显示优化模块,用于根据所述场景识别模块识别出的所述场景信息,对所述待显示图像进行显示优化;
    显示模块,用于显示经过所述显示优化模块显示优化后的所述待显示图 像。
  8. 如权利要求7所述的装置,其特征在于,所述显示模块,还用于在所述接收模块接收到所述摄像头的开启指令后,显示预览画面,其中所述预览画面为通过所述摄像头采集到的画面;
    所述装置还包括:
    场景信息写入模块,用于通过如下方式将所述场景信息写入所述待显示图像:
    根据光线传感器、GPS传感器、红外传感器、磁力传感器、气压传感器、激光传感器或镜头指向角度传感器中的至少一种传感器获取的信息,和所述显示模块显示的所述预览画面中的图像信息,确定所述预览画面中的图像所属的场景信息;
    在所述接收模块接收到图像拍摄指令时,拍摄所述待显示图像,并将得到的所述场景信息写入所述待显示图像。
  9. The apparatus according to claim 8, wherein, when determining, according to the information acquired by the at least one sensor and the image information in the preview picture, the scene information to which the image in the preview picture belongs, the scene information writing module is specifically configured to:
    after the receiving module receives the turn-on instruction and before the capture instruction is received, classify, by N predefined scene classifiers and according to the information acquired by the at least one sensor and the image information in the preview picture, a scene to which the image in the preview picture belongs, and obtain a scene type output by each scene classifier, wherein N is a positive integer not less than 2; and
    determine scene information corresponding to the scene type output the largest number of times as the scene information of the to-be-displayed image.
  10. The apparatus according to claim 9, wherein, when classifying, by the N predefined scene classifiers and according to the information acquired by the at least one sensor and the image information in the preview picture each time the preset number of frames is reached, the scene to which the image in the preview picture belongs, the scene information writing module is specifically configured to:
    perform, in a configured order of the N scene classifiers, the following operation on every preset number of frames of images in the preview picture after the receiving module receives the turn-on instruction:
    each time the preset number of frames is reached, classify, by one scene classifier selected based on the configured order and according to the information acquired by the at least one sensor and the image information in the preview picture at that time, a scene to which the image in the preview picture belongs at that time.
  11. The apparatus according to any one of claims 8 to 10, wherein, when writing the obtained scene information into the to-be-displayed image, the scene information writing module is specifically configured to write the obtained scene information into the vendor comment MakerNote field of the exchangeable image file (EXIF) data area of the image.
  12. The apparatus according to any one of claims 7 to 11, wherein the display optimization module is specifically configured to:
    obtain, from a preconfigured lookup table, an optimization parameter corresponding to the scene information of the to-be-displayed image; and
    adjust a display parameter of a display screen of the terminal device based on the optimization parameter, and display the to-be-displayed image.
  13. A terminal, comprising:
    a camera, configured to capture a to-be-displayed image;
    a processor, configured to: when a user-triggered display instruction for displaying the to-be-displayed image is received, identify scene information comprised in the to-be-displayed image, wherein the scene information was written into the to-be-displayed image when the camera captured the to-be-displayed image; and perform display optimization on the to-be-displayed image according to the identified scene information; and
    a display, configured to display the to-be-displayed image after the display optimization.
  14. The terminal according to claim 13, wherein the processor is further configured to instruct, when an instruction to turn on the camera is received, the display to display a preview picture, wherein the preview picture is a picture captured by the camera;
    the display is further configured to display the preview picture; and
    the processor is further configured to: determine, according to information acquired by at least one of a light sensor, a GPS sensor, an infrared sensor, a magnetic sensor, a barometric pressure sensor, a laser sensor, or a lens pointing angle sensor, and image information in the preview picture, scene information to which an image in the preview picture belongs; and, when an image capture instruction is received, capture the to-be-displayed image and write the obtained scene information into the to-be-displayed image.
  15. The terminal according to claim 14, wherein, when determining, according to the information acquired by the at least one sensor and the image information in the preview picture, the scene information to which the image in the preview picture belongs, the processor is specifically configured to:
    after the turn-on instruction is received and before the capture instruction is received, classify, by N predefined scene classifiers and according to the information acquired by the at least one sensor and the image information in the preview picture, a scene to which the image in the preview picture belongs, and obtain a scene type output by each scene classifier, wherein N is a positive integer not less than 2; and
    determine scene information corresponding to the scene type output the largest number of times as the scene information of the to-be-displayed image.
  16. The terminal according to claim 15, wherein, when classifying, by the N predefined scene classifiers and according to the information acquired by the at least one sensor and the image information in the preview picture, the scene to which the image in the preview picture belongs, the processor is specifically configured to:
    perform, in a configured order of the N scene classifiers, the following operation on every preset number of frames of images in the preview picture after the turn-on instruction is received:
    each time the preset number of frames is reached, classify, by one scene classifier selected based on the configured order and according to the information acquired by the at least one sensor and the image information in the preview picture at that time, a scene to which the image in the preview picture belongs at that time.
  17. The terminal according to any one of claims 14 to 16, wherein, when writing the obtained scene information into the to-be-displayed image, the processor is specifically configured to:
    write the obtained scene information into the vendor comment MakerNote field of the exchangeable image file (EXIF) data area of the image.
  18. The terminal according to any one of claims 13 to 17, wherein, when performing display optimization on the to-be-displayed image based on the scene information written into the to-be-displayed image, the processor is specifically configured to:
    obtain, from a preconfigured lookup table, an optimization parameter corresponding to the scene information of the to-be-displayed image; and
    adjust a display parameter of a display screen of the terminal device based on the optimization parameter, and display the to-be-displayed image.
PCT/CN2016/108730 2016-10-17 2016-12-06 Image display optimization method and apparatus WO2018072271A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680080589.8A CN108701439B (zh) 2016-10-17 2016-12-06 Image display optimization method and apparatus
US16/342,451 US10847073B2 (en) 2016-10-17 2016-12-06 Image display optimization method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610905784.1 2016-10-17
CN201610905784 2016-10-17

Publications (1)

Publication Number Publication Date
WO2018072271A1 true WO2018072271A1 (zh) 2018-04-26

Family

ID=62018065

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108730 WO2018072271A1 (zh) 2016-10-17 2016-12-06 一种图像显示优化方法及装置

Country Status (3)

Country Link
US (1) US10847073B2 (zh)
CN (1) CN108701439B (zh)
WO (1) WO2018072271A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110149517A (zh) * 2018-05-14 2019-08-20 腾讯科技(深圳)有限公司 Video processing method and apparatus, electronic device, and computer storage medium

Families Citing this family (11)

CN109981990B (zh) * 2019-04-10 2023-07-28 深圳传音通讯有限公司 Image processing method and apparatus, and terminal
EP4036900A4 (en) * 2019-09-25 2022-09-07 BOE Technology Group Co., Ltd. SHIFT REGISTER UNIT, DRIVE METHOD, GATE DRIVE CIRCUIT AND DISPLAY DEVICE
CN113542711B (zh) * 2020-04-14 2024-08-27 青岛海信移动通信技术有限公司 Image display method and terminal
CN111629229A (zh) * 2020-05-19 2020-09-04 深圳Tcl新技术有限公司 Picture display method, terminal device, and medium
CN111897609A (zh) * 2020-07-14 2020-11-06 福建捷联电子有限公司 Method for a display to automatically adjust optimal picture quality when displaying photographic pictures
CN111866384B (zh) * 2020-07-16 2022-02-01 深圳传音控股股份有限公司 Shooting control method, mobile terminal, and computer storage medium
CN112468722B (zh) * 2020-11-19 2022-05-06 惠州Tcl移动通信有限公司 Shooting method, apparatus, device, and storage medium
CN113115085A (zh) * 2021-04-16 2021-07-13 海信电子科技(武汉)有限公司 Video playback method and display device
CN113329173A (zh) * 2021-05-19 2021-08-31 Tcl通讯(宁波)有限公司 Image optimization method and apparatus, storage medium, and terminal device
CN115633250A (zh) * 2021-07-31 2023-01-20 荣耀终端有限公司 Image processing method and electronic device
CN117032617B (zh) * 2023-10-07 2024-02-02 启迪数字科技(深圳)有限公司 Multi-screen-based grid picking method, apparatus, device, and medium

Citations (6)

TW200304754A (en) * 2002-03-19 2003-10-01 Hewlett Packard Co Digital camera and method for balancing color in a digital image
CN101789233A (zh) * 2010-01-14 2010-07-28 宇龙计算机通信科技(深圳)有限公司 Method for displaying an image on a mobile terminal, and mobile terminal
CN102954836A (zh) * 2012-10-26 2013-03-06 京东方科技集团股份有限公司 Ambient light sensor, user device, and display apparatus
CN104820537A (zh) * 2015-04-22 2015-08-05 广东欧珀移动通信有限公司 Method and apparatus for adjusting the display effect of a terminal
CN104932849A (zh) * 2014-03-21 2015-09-23 海信集团有限公司 Method, device, and system for setting an application scene
CN103391363B (zh) * 2013-07-11 2015-10-28 广东欧珀移动通信有限公司 Photographing preview display method and apparatus, and mobile terminal

Family Cites Families (20)

US5270831A (en) * 1990-09-14 1993-12-14 Eastman Kodak Company Storage and playback of digitized images in digital database together with presentation control file to define image orientation/aspect ratio
WO1996024216A1 (en) 1995-01-31 1996-08-08 Transcenic, Inc. Spatial referenced photography
JPH08336069A (ja) * 1995-04-13 1996-12-17 Eastman Kodak Co Electronic still camera
EP1071285A1 (en) * 1999-07-19 2001-01-24 Texas Instruments Inc. Vertical compensation in a moving camera
US7133070B2 (en) * 2001-09-20 2006-11-07 Eastman Kodak Company System and method for deciding when to correct image-specific defects based on camera, scene, display and demographic data
KR101155526B1 (ko) 2005-06-15 2012-06-19 삼성전자주식회사 Method of controlling a digital image processing apparatus having a go-to function
JP2008271249A (ja) 2007-04-20 2008-11-06 Seiko Epson Corp Information processing method, information processing apparatus, and program
KR101542436B1 (ko) 2008-07-29 2015-08-06 후지필름 가부시키가이샤 Imaging apparatus and imaging method
JP2011055476A (ja) 2009-08-06 2011-03-17 Canon Inc Display device
JP5577415B2 (ja) * 2010-02-22 2014-08-20 ドルビー ラボラトリーズ ライセンシング コーポレイション Video display with rendering control using metadata embedded in the bitstream
JP2011193066A (ja) 2010-03-12 2011-09-29 Sanyo Electric Co Ltd Imaging apparatus
US9883116B2 (en) * 2010-12-02 2018-01-30 Bby Solutions, Inc. Video rotation system and method
KR20120078980A (ko) 2011-01-03 2012-07-11 삼성전자주식회사 Apparatus and method for extracting orientation information of an image in a mobile terminal
WO2015017314A1 (en) * 2013-07-30 2015-02-05 Dolby Laboratories Licensing Corporation System and methods for generating scene stabilized metadata
CN103533244A (zh) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Photographing apparatus and automatic visual-effect-processing photographing method thereof
CN104159036B (zh) 2014-08-26 2018-09-18 惠州Tcl移动通信有限公司 Display method for image orientation information and photographing device
CN105450923A (zh) 2014-09-25 2016-03-30 索尼公司 Image processing method, image processing apparatus, and electronic device
EP3739564A1 (en) * 2015-08-31 2020-11-18 LG Electronics Inc. Image display apparatus
US10516810B2 (en) * 2016-03-07 2019-12-24 Novatek Microelectronics Corp. Method of gamut mapping and related image conversion system
US11030728B2 (en) * 2018-05-29 2021-06-08 Apple Inc. Tone mapping techniques for increased dynamic range


Also Published As

Publication number Publication date
US10847073B2 (en) 2020-11-24
CN108701439A (zh) 2018-10-23
US20200051477A1 (en) 2020-02-13
CN108701439B (zh) 2021-02-12

Similar Documents

Publication Publication Date Title
WO2018072271A1 (zh) Image display optimization method and apparatus
CN107197169B (zh) High dynamic range image shooting method and mobile terminal
US9686475B2 (en) Integrated light sensor for dynamic exposure adjustment
WO2019109801A1 (zh) Shooting parameter adjustment method and apparatus, storage medium, and mobile terminal
CN112954210B (zh) Photographing method and apparatus, electronic device, and medium
US9185286B2 (en) Combining effective images in electronic device having a plurality of cameras
KR102124604B1 (ko) Image stabilization method and electronic device therefor
CN111654635A (zh) Shooting parameter adjustment method and apparatus, and electronic device
US11158057B2 (en) Device, method, and graphical user interface for processing document
US20230245441A9 (en) Image detection method and apparatus, and electronic device
WO2020215861A1 (zh) Picture display method, picture display apparatus, electronic device, and storage medium
WO2019105457A1 (zh) Image processing method, computer device, and computer-readable storage medium
US20150049948A1 (en) Mobile document capture assist for optimized text recognition
WO2018184260A1 (zh) Document image correction method and apparatus
WO2018171047A1 (zh) Photographing guidance method, device, and system
CN107330859B (zh) Image processing method and apparatus, storage medium, and terminal
US20130279811A1 (en) Method and system for automatically selecting representative thumbnail of photo folder
CN110290426B (zh) Method, apparatus, and device for displaying resource, and storage medium
JP2017509090A (ja) Image classification method and apparatus
CN105827963A (zh) Scene change detection method during photographing, and mobile terminal
US10212363B2 (en) Picture processing method and electronic device
US11615513B2 (en) Control display method and electronic device
WO2016044983A1 (zh) Image processing method and apparatus, and electronic device
US11706522B2 (en) Imaging system, server, imaging device, imaging method, program, and recording medium having a function of assisting capturing of an image coincident with preference of a user
WO2022127738A1 (zh) Image processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16919037

Country of ref document: EP

Kind code of ref document: A1