CN106851119B - Picture generation method and equipment and mobile terminal - Google Patents


Info

Publication number
CN106851119B
CN106851119B (Application CN201710217678.9A)
Authority
CN
China
Prior art keywords
image
mobile terminal
brightness value
camera
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710217678.9A
Other languages
Chinese (zh)
Other versions
CN106851119A (en)
Inventor
巫吉辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qiku Internet Technology Shenzhen Co Ltd
Original Assignee
Qiku Internet Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiku Internet Technology Shenzhen Co Ltd filed Critical Qiku Internet Technology Shenzhen Co Ltd
Priority to CN201710217678.9A
Publication of CN106851119A
Application granted
Publication of CN106851119B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 - Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals

Abstract

The invention discloses a picture generation method, a picture generation device, and a mobile terminal. The method comprises: acquiring a brightness value of the ambient light at the location of the mobile terminal; if the brightness value is below a preset threshold, acquiring a first image containing an image contour; superimposing the first image on a second image captured normally by a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour, where the first and second images correspond to the same capture time and the image area covered by the first image is greater than or equal to that covered by the second image; and performing pixel enhancement on the region to be enhanced to obtain a sharper new picture. By determining the region to be enhanced and enhancing only that region, the method reduces the amount of data to be processed and improves the quality of pictures shot in poor light.

Description

Picture generation method and equipment and mobile terminal
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and an apparatus for generating a picture, and a mobile terminal.
Background
With the development of smartphones, more and more people like to take pictures with their phones. At present, however, phones shoot well in good daytime light but very poorly when light is poor.
To overcome this shortcoming, many manufacturers currently rely on better hardware: better photosensitive elements, larger apertures, and the like. This approach cannot help devices whose hardware is fixed, and better hardware brings larger volume and higher cost, which runs against the trend toward thin and light smartphones; given the high cost, the existing technical problem is not effectively solved.
Disclosure of Invention
To address the defects of the prior art, the present invention provides a picture generation method, a picture generation device, and a mobile terminal, which overcome those defects and achieve a better picture effect.
The invention proposes the following specific embodiments:
An embodiment of the invention provides a picture generation method, comprising:
acquiring a brightness value of the ambient light at the location of the mobile terminal;
if the brightness value is below a preset threshold, acquiring a first image containing an image contour;
superimposing the first image on a second image captured normally by a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour, where the first and second images correspond to the same capture time and the image area covered by the first image is greater than or equal to that covered by the second image;
and performing pixel enhancement on the region to be enhanced to obtain a sharper new picture.
In a specific embodiment, the obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
In a specific embodiment, the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
In a specific embodiment, the method further comprises:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
In a particular embodiment, the image contour comprises an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In a specific embodiment, the step of identifying the outline of each scene in the first image to obtain the outer edge boundary of each outline includes:
identifying the outline of each scene in the first image by adaptive spatial-domain filtering, so as to obtain the outer edge boundary line of the outline of each scene in the first image.
In a specific embodiment, the controlling the activation of the low-light night vision device to acquire the first image includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
In a specific embodiment, the second image acquired by the camera of the mobile terminal by normal shooting is generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
In a specific embodiment, the "superimposing the first image with a second image obtained by normal shooting by the camera, and determining a region to be enhanced in the second image according to the image contour" includes:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
The embodiment of the present invention further provides an apparatus for generating an image, including:
the first acquisition module is used for acquiring the brightness value of the light of the environment where the mobile terminal is located;
the second acquisition module is used for acquiring a first image containing an image outline when the brightness value is smaller than a preset threshold value;
the determining module is used for overlapping the first image with a second image acquired by a camera of the mobile terminal through normal shooting, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and the enhancement module is used for carrying out pixel enhancement processing on the region to be enhanced so as to obtain a clear enhanced new picture.
In a specific embodiment, the first obtaining module is configured to:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
In a specific embodiment, the camera includes: the camera comprises a first camera and/or a second camera;
the first obtaining module is configured to:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
In a specific embodiment, the apparatus further comprises: a correction module to: acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
In a particular embodiment, the image contour comprises an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the second acquisition module acquires the first image containing the image contour by:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In a specific embodiment, the second acquiring module "identifying the outline of each scene in the first image to acquire the outer edge boundary of the outline of each scene in the first image" includes:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
In a specific embodiment, the second acquiring module "activating the low-light night vision device to acquire the first image" includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
In a specific embodiment, the second image acquired by the camera in normal shooting is generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
In a specific embodiment, the determining module is configured to:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
An embodiment of the present invention further provides a mobile terminal, including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring the brightness value of light of the environment where the mobile terminal is located;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and carrying out pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture.
In a specific embodiment, the obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
In a specific embodiment, the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
In a specific embodiment, the processor is further configured to:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
In a particular embodiment, the image contour comprises an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In a specific embodiment, the "recognizing the external shape of each scene in the first image to obtain the outer edge boundary of the external shape of each scene in the first image" includes:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
In a specific embodiment, the controlling the activation of the low-light night vision device to acquire the first image includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
In a specific embodiment, the second image acquired by the camera in normal shooting is generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
In a specific embodiment, the "superimposing the first image with the second image acquired by the camera, and determining the region to be enhanced in the second image according to the image contour" includes:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
In summary, the embodiments of the invention provide a picture generation method, a picture generation device, and a mobile terminal. The method acquires the brightness value of the ambient light at the mobile terminal's location; if the brightness value is below a preset threshold, it acquires a first image containing an image contour; it superimposes the first image on a second image captured normally by a camera of the mobile terminal and determines a region to be enhanced in the second image according to the image contour, where the two images correspond to the same capture time and the image area covered by the first image is greater than or equal to that covered by the second image; and it performs pixel enhancement on the region to be enhanced to obtain a sharper new picture. By locating the region to be enhanced from the image contour and applying pixel enhancement only to that region, the method improves image definition while reducing the processing workload.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for the embodiments are briefly described below. The following drawings illustrate only some embodiments of the invention and should not be considered limiting; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a method for generating a picture according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for generating a picture according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for generating a picture according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for generating a picture according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
Various embodiments of the present disclosure are described more fully hereinafter. The disclosure admits various embodiments, modifications, and variations; it should be understood that the disclosure is not limited to the specific embodiments described here, but covers all modifications, equivalents, and alternatives falling within the spirit and scope of its various embodiments.
Hereinafter, the term "includes" or "may include" as used in various embodiments of the present disclosure indicates the presence of the disclosed functions, operations, or elements, and does not preclude the addition of one or more further functions, operations, or elements. Furthermore, the terms "comprising", "having", and their derivatives denote only the recited features, integers, steps, operations, elements, components, or combinations thereof, and should not be construed as excluding the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
In various embodiments of the disclosure, the expression "A or B" or "at least one of A and/or B" includes any or all combinations of the listed words. For example, the expression "A or B" or "at least one of A and/or B" may include A, may include B, or may include both A and B.
Expressions (such as "first", "second", and the like) used in various embodiments of the present disclosure may modify various constituent elements in the various embodiments, but may not limit the respective constituent elements. For example, the above description does not limit the order and/or importance of the elements described. The foregoing description is for the purpose of distinguishing one element from another. For example, the first user device and the second user device indicate different user devices, although both are user devices. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of various embodiments of the present disclosure.
It should be noted that: if it is described that one constituent element is "connected" to another constituent element, the first constituent element may be directly connected to the second constituent element, and a third constituent element may be "connected" between the first constituent element and the second constituent element. In contrast, when one constituent element is "directly connected" to another constituent element, it is understood that there is no third constituent element between the first constituent element and the second constituent element.
The term "user" used in various embodiments of the present disclosure may indicate a person using an electronic device or a device using an electronic device (e.g., an artificial intelligence electronic device).
The terminology used in the various embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the present disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present disclosure belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined in various embodiments of the present disclosure.
Example 1
Embodiment 1 of the present invention discloses a method for generating a picture, as shown in fig. 1, including:
step 101, obtaining the brightness value of light of the environment where the mobile terminal is located;
Specifically, step 101 of obtaining the brightness value of the ambient light can be carried out in the following ways, as shown in fig. 2 and fig. 3:
Mode 1: acquiring an optical signal of the mobile terminal's current environment through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
Specifically, the light sensor (also called a brightness sensor) senses the intensity of ambient light. A mobile terminal is generally equipped with a light sensor, which can monitor the brightness of light at the terminal's current position. The specific processing is to convert the optical signal into an electrical signal and then obtain the brightness value of the light from that electrical signal.
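As an illustration of Mode 1, the sketch below maps a digitized light-sensor reading to a brightness value and applies the preset-threshold test of step 102. The ADC scale, the lux-per-count calibration factor, and the 50-lux threshold are illustrative assumptions, not values from the patent.

```python
ADC_MAX = 4095          # 12-bit ADC full scale (assumed)
LUX_PER_COUNT = 0.25    # sensor-specific calibration factor (assumed)

def brightness_from_sensor(adc_counts: int) -> float:
    """Convert the digitized electrical signal to a brightness value in lux."""
    if not 0 <= adc_counts <= ADC_MAX:
        raise ValueError("ADC reading out of range")
    return adc_counts * LUX_PER_COUNT

def is_low_light(lux: float, threshold: float = 50.0) -> bool:
    """Step 102's test: a brightness value below the preset threshold
    triggers the contour-assisted capture path."""
    return lux < threshold
```

For example, a reading of 400 counts would map to 100 lux, which under the assumed threshold would not trigger the low-light path.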
Mode 2: in this mode, the camera on the mobile terminal comprises a first camera and/or a second camera, and the brightness value is obtained through the camera. The specific process is as follows:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
Specifically, when a camera (the first camera and/or the second camera) shoots, its internal photosensitive element records the light in the captured area, and this is reflected in the generated pre-shot image; the brightness of the environment can therefore be derived from the captured image. To improve accuracy, the brightness value of each pixel in the pre-shot image can be collected and statistically analyzed to determine an accurate brightness value for the ambient light.
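Mode 2's per-pixel brightness analysis can be sketched as follows. The pre-shot frame is modeled as a nested list of (R, G, B) tuples, and Rec. 601 luma weights are assumed for the per-pixel brightness; the patent does not specify a particular formula.

```python
def pixel_luma(rgb):
    """Per-pixel brightness using the common Rec. 601 weights (assumed)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def scene_brightness(frame):
    """Average the luminance of every pixel in the pre-shot image to
    estimate the ambient brightness value."""
    total, count = 0.0, 0
    for row in frame:
        for px in row:
            total += pixel_luma(px)
            count += 1
    return total / count
```

The resulting average would then be compared against the preset threshold of step 102, just like a sensor-derived brightness value.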
In order to further improve the accuracy, the method further comprises the following steps:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
Specifically, the ambient light differs with geographic location (for example, northern versus southern hemisphere, and different longitudes and latitudes), with season, and with time of day. A brightness value can therefore be recorded for each time at each location to build a historical brightness database; for example, at 19:00 local time in summer near the Tropic of Cancer in the northern hemisphere, the brightness value is still relatively high. A historical brightness value is retrieved from this database using the mobile terminal's current geographic position (obtained, for example, by GPS) and the current time, and the measured brightness value is then corrected against the retrieved historical value.
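The history-based correction can be sketched as a lookup keyed by location and time, blended with the measured value. The database contents, the key structure, and the 0.7/0.3 blend weights are illustrative assumptions; the patent does not define the correction formula.

```python
HISTORY_DB = {
    # (region, month, hour) -> historical lux (illustrative entries)
    ("tropic_of_cancer", 7, 19): 120.0,   # summer evening, still fairly bright
    ("tropic_of_cancer", 1, 19): 5.0,     # winter evening, dark
}

def corrected_brightness(measured, region, month, hour,
                         w_measured=0.7, w_history=0.3):
    """Correct a measured brightness value against the historical record,
    falling back to the measurement when no record exists."""
    hist = HISTORY_DB.get((region, month, hour))
    if hist is None:
        return measured
    return w_measured * measured + w_history * hist
```

A weighted blend is one simple way to "correct" a measurement with a prior; any other fusion rule would fit the patent's wording equally well.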
Step 102, if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
Specifically, a brightness value below the preset threshold indicates low ambient brightness (for example, the mobile terminal is in a night scene). In this case a mobile terminal such as a mobile phone shoots poorly, so a first image containing an image contour is obtained first.
Wherein the image contour comprises an outer edge boundary of a scene in the image and a region comprised by the outer edge boundary;
the step 102 of acquiring the first image containing the image contour includes:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In this embodiment, the first image is a night-vision image captured by a low-light night vision device; it contains the outline and position of each captured scene. The outline is identified to determine its outer edge boundary: for example, if a building is captured, the outer edge lines of its outline are vertical straight lines. Each of the other scenes likewise has its own outline and outer boundary lines, and the image contour of each scene is then determined from those boundary lines.
Specifically, the "recognizing the external shape of each scene in the first image to obtain the outer edge boundary of the external shape of each scene in the first image" includes:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
Specifically, the outline of each scene may be identified by adaptive spatial-domain filtering, although other existing processing methods may be used instead; any method that obtains the outer edge boundary of each scene's outline through contour recognition is acceptable, and the invention is not limited to this specific method.
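Since the patent leaves the filtering details open, the following sketch uses a simple first-difference gradient threshold as a stand-in for adaptive spatial-domain filtering: pixels whose local gradient magnitude exceeds a threshold are marked as outline boundary pixels. The threshold value is an illustrative assumption.

```python
def edge_map(img, threshold=30):
    """Mark pixels whose local gradient magnitude exceeds a threshold.
    img is a 2-D list of grayscale values; returns a same-sized 0/1 mask
    approximating the outer edge boundary lines of the scenes."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]  # horizontal difference
            gy = img[min(y + 1, h - 1)][x] - img[y][x]  # vertical difference
            if abs(gx) + abs(gy) > threshold:
                mask[y][x] = 1
    return mask
```

A production system would use a genuinely adaptive filter (the kernel or threshold varying with local image statistics); this fixed-threshold version only shows where the boundary pixels come from.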
And wherein said controlling the low-light night vision device to acquire the first image comprises:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
Specifically, to improve accuracy, the low-light night vision device can take multiple pictures at the same time and synthesize them into the first image, so that the synthesized image captures more scene detail and yields a more accurate image contour.
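The multi-shot synthesis step can be sketched as a pixel-wise average of frames captured at the same instant, which suppresses random sensor noise; the patent does not specify the synthesis algorithm, so averaging is an assumption.

```python
def synthesize(frames):
    """Pixel-wise average of equally sized grayscale frames captured at the
    same moment; averaging N frames reduces uncorrelated noise."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                out[y][x] += f[y][x] / n
    return out
```

The same routine would serve for the second image's synthesis from multiple normally shot pictures, described below.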
Step 103: superimposing the first image on a second image captured normally by a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour, where the first and second images correspond to the same capture time and the image area covered by the first image is greater than or equal to that covered by the second image;
specifically, the second image may be a picture normally taken by a camera, and in order to further improve the definition of the image, in an embodiment, the second image obtained by the camera of the mobile terminal by normally taking the picture may be further generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
Therefore, the second image synthesized from a plurality of pictures shot at the same moment has richer detail and higher definition, which facilitates the subsequent pixel enhancement processing and reduces its difficulty.
Specifically, "the time corresponding to the first image is the same as the time corresponding to the second image" means the two capture times agree at a chosen granularity, for example milliseconds: two captures within the same millisecond are considered simultaneous. Other granularities, such as microseconds or nanoseconds, may be set as needed.
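The timestamp comparison above can be sketched as a simple bucket test; the microsecond inputs and the bucket rule are illustrative assumptions:

```python
def same_moment(t1_us, t2_us, granularity_us=1000):
    """Two capture timestamps (in microseconds) count as the same moment
    when they land in the same granularity bucket: 1 ms by default, and
    adjustable to finer granularities as the embodiment allows."""
    return t1_us // granularity_us == t2_us // granularity_us

# Captures 0.5 ms apart inside one millisecond bucket are "simultaneous";
# captures straddling a millisecond boundary are not.
paired = same_moment(1617000123400, 1617000123900)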
In addition, the image area captured in the first image is greater than or equal to the image area captured in the second image. For example, if the image area in the second image includes an A area and a B area, then the image area in the first image needs to include at least the A area and the B area.
And the step of overlapping the first image with a second image acquired by the camera in normal shooting and determining the region to be enhanced in the second image according to the image contour comprises the following steps:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
Specifically, the time information of the first image and the second image is consistent. After the alignment processing, the image areas in the two images coincide; for example, both include, and only include, an A area, and the image contour in the first image is a cross-shaped region located at the center of the A area. In that case the region to be enhanced in the second image is likewise the cross-shaped region at the center of the A area.
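Since the aligned images share pixel coordinates, the orthographic projection step reduces to carrying the contour's (x, y) point set over to the second image as a mask. A sketch under that assumption:

```python
def projection_mask(contour_points, width, height):
    """After alignment the two images share pixel coordinates, so the
    orthographic projection of the first image's contour onto the second
    image is the same (x, y) set, rendered here as a boolean mask."""
    mask = [[False] * width for _ in range(height)]
    for x, y in contour_points:
        if 0 <= x < width and 0 <= y < height:
            mask[y][x] = True
    return mask

# A cross-shaped contour at the center of a 5x5 "A area" becomes the
# region to be enhanced in the second image.
cross = {(2, 1), (1, 2), (2, 2), (3, 2), (2, 3)}
region = projection_mask(cross, 5, 5)
```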
And 104, carrying out pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture.
After the region to be enhanced is determined, pixel enhancement processing, for example contrast adjustment or sharpening, may be performed on that region; any existing pixel enhancement processing method may be used, so as to obtain a clear, enhanced new picture.
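A simplified sketch of masked pixel enhancement, using a linear contrast gain about the region mean as a stand-in for the contrast/sharpening processing mentioned above (the gain value is an assumption):

```python
def enhance_region(img, mask, gain=1.5):
    """Boost local contrast inside the masked region only; pixels outside
    the region to be enhanced are left untouched."""
    pix = [img[y][x] for y in range(len(img)) for x in range(len(img[0]))
           if mask[y][x]]
    out = [row[:] for row in img]          # copy; original is preserved
    if not pix:
        return out
    mean = sum(pix) / len(pix)
    for y in range(len(img)):
        for x in range(len(img[0])):
            if mask[y][x]:
                v = mean + gain * (img[y][x] - mean)
                out[y][x] = max(0, min(255, round(v)))
    return out

# Only the first row is masked, so only its contrast is stretched.
img = [[100, 150], [100, 150]]
mask = [[True, True], [False, False]]
enhanced = enhance_region(img, mask)
```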
Embodiment 2
Embodiment 2 of the present invention further discloses an apparatus for generating an image, as shown in fig. 4, including:
a first obtaining module 201, configured to obtain a brightness value of a light ray of an environment where the mobile terminal is located;
a second obtaining module 202, configured to obtain a first image including an image contour when the brightness value is smaller than a preset threshold;
the determining module 203 is configured to superimpose the first image and a second image obtained by normal shooting of a camera of the mobile terminal, and determine a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and the enhancing module 204 is configured to perform pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture.
In a specific embodiment, the first obtaining module 201 is configured to:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
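A hypothetical sketch of that conversion chain: the digitised electrical signal (an ADC reading) is mapped to a brightness value. The `adc_max` and `full_scale_lux` figures and the linear response are assumptions, since real light sensors ship their own calibration curves:

```python
def adc_to_brightness(adc_value, adc_max=4095, full_scale_lux=1000.0):
    """Map a light-sensor ADC reading (the digitised electrical signal)
    to a brightness value, assuming a linear sensor response."""
    return adc_value / adc_max * full_scale_lux
```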
In a specific embodiment, the camera includes a first camera and/or a second camera;
the first obtaining module 201 is configured to:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
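The per-pixel brightness analysis can be sketched as a simple mean over the pre-shot image; the mean statistic and the threshold value here are illustrative choices only, since the embodiment merely requires some determination based on per-pixel brightness:

```python
def ambient_brightness(preshot):
    """Estimate scene brightness as the mean pixel value of the pre-shot image."""
    h, w = len(preshot), len(preshot[0])
    return sum(sum(row) for row in preshot) / (h * w)

PRESET_THRESHOLD = 60  # hypothetical threshold
preshot = [[20, 30, 25], [35, 15, 30]]
# Below the threshold, the night-vision path of the method is taken.
use_night_vision = ambient_brightness(preshot) < PRESET_THRESHOLD
```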
In a specific embodiment, the apparatus further comprises: a correction module to: acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
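The correction rule is left open by the embodiment; one plausible sketch blends the measured value with the historical record for the same place and hour (the blend weight and database shape are assumptions):

```python
def correct_brightness(measured, history_db, location, hour, weight=0.3):
    """Blend the measured brightness with the historical value recorded
    for the same geographic position and time of day; fall back to the
    measurement when no record exists."""
    key = (location, hour)
    if key not in history_db:
        return measured
    return (1 - weight) * measured + weight * history_db[key]

history = {("shenzhen", 21): 18.0}  # hypothetical historical records
corrected = correct_brightness(30.0, history, "shenzhen", 21)
```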
In a particular embodiment, the image contour comprises an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the second acquiring module 202 "acquiring the first image containing the image contour" includes:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In a specific embodiment, the second obtaining module 202 "identifying the outline of each scene in the first image to obtain the outer edge boundary of the outline of each scene in the first image" includes:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
In a specific embodiment, the second obtaining module 202 "activating the low-light night vision device to obtain the first image" includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
In a specific embodiment, the second image acquired by the camera in normal shooting is generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
In a specific embodiment, the determining module 203 is configured to:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
Embodiment 3
Embodiment 3 of the present invention further discloses a mobile terminal. As shown in fig. 5, for convenience of description only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments of the present invention. The mobile terminal may be any mobile terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example.
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present invention. Referring to fig. 5, the handset includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (WiFi) module 1570, processor 1580, and power supply 1590. Those skilled in the art will appreciate that the handset configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 5:
the RF circuit 1510 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the downlink information to the baseband processor 1581; in addition, the data for designing uplink is transmitted to the base station. In general, RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The Display unit 1540 may include a Display panel 1541, and optionally, the Display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541; when the touch panel 1531 detects a touch operation on or near it, the operation is transmitted to the processor 1580 to determine the type of the touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to that type. Although in fig. 5 the touch panel 1531 and the display panel 1541 are two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1531 and the display panel 1541 may be integrated to implement both functions.
The handset can also include at least one sensor 1550, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 1560, speaker 1561, and microphone 1562 may provide an audio interface between a user and the mobile phone. The audio circuit 1560 may transmit the electrical signal converted from received audio data to the speaker 1561, which converts it into a sound signal for output; conversely, the microphone 1562 converts collected sound signals into electrical signals, which the audio circuit 1560 receives and converts into audio data. The audio data is processed by the processor 1580 and then either transmitted via the RF circuit 1510 to, for example, another mobile phone, or output to the memory 1520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1570 the mobile phone can help a user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband internet access. Although fig. 5 shows the WiFi module 1570, it is understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1580 is the control center of the mobile phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes its data by running or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520, thereby monitoring the phone as a whole. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which primarily handles the operating system, user interfaces, application programs, and the like. The baseband processor 1581 mainly performs baseband encoding/decoding, voice coding, and the like; a modem processor may or may not be integrated into the baseband processor 1581, and the baseband processor 1581 may itself be integrated within the processor 1580.
The mobile phone also includes a power supply 1590 (e.g., a battery) for powering the various components. Preferably, the power supply is logically coupled to the processor 1580 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present invention, the processor 1580 included in the terminal is configured to:
acquiring the brightness value of light of the environment where the mobile terminal is located;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and carrying out pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture.
In a specific embodiment, the obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
In a specific embodiment, the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
In a specific embodiment, the processor is further configured to:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
In a particular embodiment, the image contour comprises an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
In a specific embodiment, the "recognizing the external shape of each scene in the first image to obtain the outer edge boundary of the external shape of each scene in the first image" includes:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
In a specific embodiment, the controlling the activation of the low-light night vision device to acquire the first image includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
In a specific embodiment, the second image acquired by the camera in normal shooting is generated by:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
In a specific embodiment, the "superimposing the first image with the second image acquired by the camera, and determining the region to be enhanced in the second image according to the image contour" includes:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
Therefore, the embodiment of the invention provides a method and equipment for generating pictures and a mobile terminal, wherein the method comprises the following steps: acquiring the brightness value of light of the environment where the mobile terminal is located; if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour; superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image; and carrying out pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture. Therefore, the region to be enhanced in the image is determined through the image contour, and the pixel enhancement processing is performed on the region to be enhanced in a targeted manner, so that the definition of the image is improved, and the processing workload is reduced.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above embodiment numbers are merely for description and do not represent the relative merits of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.
The embodiment of the invention also discloses:
a1, a method for generating pictures, comprising:
acquiring the brightness value of light of the environment where the mobile terminal is located;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and carrying out pixel enhancement processing on the region to be enhanced to obtain a clear enhanced new picture.
A2, the method as in A1, the obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
A3, the method of A1, the camera comprising a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
A4, the method of A3, further comprising:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
A5, the method of A1, the image contour including an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
A6, the method as in A5, wherein the identifying the outline of each scene in the first image to obtain the outer edge boundary of the outline of each scene in the first image comprises:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
A7, the method as in A5, the controlling to activate the low-light night vision device to acquire the first image includes:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
A8, the method of A1, wherein the second image acquired by the camera of the mobile terminal through normal shooting is generated by the following steps:
controlling to start the camera to shoot at the same time to generate a plurality of pictures;
the generated plurality of photographs are subjected to synthesis processing to generate a second image.
A9, the method as in A1, wherein the step of superimposing the first image with a second image acquired by the camera in normal shooting and determining the region to be enhanced in the second image according to the image contour comprises:
selecting a first image and a second image which have consistent time information and are subjected to alignment processing to be superposed;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
B10, an apparatus for picture generation, comprising:
the first acquisition module is used for acquiring the brightness value of the light of the environment where the mobile terminal is located;
the second acquisition module is used for acquiring a first image containing an image outline when the brightness value is smaller than a preset threshold value;
the determining module is used for overlapping the first image with a second image acquired by a camera of the mobile terminal through normal shooting, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area corresponding to the first image is larger than or equal to the image area in the second image;
and the enhancement module is used for carrying out pixel enhancement processing on the region to be enhanced so as to obtain a clear enhanced new picture.
B11, the apparatus as in B10, the first obtaining module to:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
based on processing the electrical signal, a brightness value is obtained.
B12, the apparatus as in B10, the camera comprising a first camera and/or a second camera;
the first obtaining module is configured to:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light of the current environment of the mobile terminal based on the brightness value of the pixel.
B13, the apparatus of B12, further comprising: a correction module to: acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
B14, the device as in B10, the image contour including an outer edge boundary of a scene in the image and a region encompassed by the outer edge boundary;
the second acquiring module "acquiring the first image containing the image contour" includes:
controlling to start a low-light night vision device to acquire a first image;
identifying the appearance of each scene in the first image to obtain the outer edge boundary line of the appearance of each scene in the first image;
an image contour corresponding to each scene is determined based on the acquired outer edge boundary lines.
B15, the apparatus as in B14, wherein the second acquiring module "recognizing the outline of each scene in the first image to acquire the outer edge boundary of the outline of each scene in the first image" comprises:
and identifying the outline of each scene in the first image by a processing mode of adaptive spatial domain filtering so as to obtain the outer edge boundary line of the outline of each scene in the first image.
B16, the apparatus as in B14, the second acquiring module "enabling the low-light night vision device to acquire the first image" comprising:
controlling to start the low-light night vision device to shoot at the same time so as to obtain a plurality of pictures;
the acquired plurality of pictures are subjected to synthesis processing to generate a first image.
B17, the device as in B10, wherein the second image acquired by normal shooting of the camera is generated by:
controlling the camera to capture a plurality of photographs at the same time;
and synthesizing the generated plurality of photographs to generate the second image.
B18, the apparatus of B10, the determining module to:
selecting a first image and a second image whose time information is consistent, aligning them, and superposing them;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
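Under the stated conditions (images aligned, first image's area at least that of the second), the orthographic projection in B18 reduces to restricting the contour mask found in the first image to the second image's extent. The top-left alignment of the two images is a simplifying assumption:

```python
import numpy as np

def region_to_enhance(contour_mask, second_shape):
    """Orthographically project the image-contour mask of the (aligned)
    first image onto the second image.

    With both images aligned and the first image's area >= the second's,
    the projection reduces to cropping the mask to the second image's
    extent (assuming the second image sits at the first image's
    top-left corner).
    """
    h, w = second_shape
    return contour_mask[:h, :w]

# First image slightly larger than the second; contour covers a block.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                      # contour region in the first image
region = region_to_enhance(mask, (6, 6))   # area to be enhanced in the second
```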
C19, a mobile terminal, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring the brightness value of light of the environment where the mobile terminal is located;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area of the first image is greater than or equal to that of the second image;
and carrying out pixel enhancement processing on the region to be enhanced to obtain a clear, enhanced new picture.
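C19's final step, pixel enhancement restricted to the region to be enhanced, might look like the following; the gain-and-clip operation is one plausible enhancement, as the disclosure does not fix the exact pixel operation:

```python
import numpy as np

def enhance_region(second_image, region_mask, gain=1.8):
    """Pixel enhancement applied only inside the region to be enhanced.

    A simple gain-and-clip brightening is used; the gain value and the
    operation itself are assumptions for illustration.
    """
    out = second_image.astype(float).copy()
    out[region_mask] = np.clip(out[region_mask] * gain, 0, 255)
    return out

second = np.full((4, 4), 100.0)            # dim second image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                      # region to be enhanced
new_picture = enhance_region(second, mask)
```

Pixels outside the mask are left untouched, so only the contour-derived region is brightened.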
C20, the mobile terminal according to C19, wherein obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
and obtaining the brightness value by processing the electrical signal.
C21, the mobile terminal as in C19, wherein the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light in the current environment of the mobile terminal based on the brightness values of the pixels.
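The per-pixel analysis in C21 can be sketched by taking each pixel's brightness as Rec. 601 luma and the ambient brightness as the mean over the pre-shot image; the mean statistic and the threshold value are assumptions:

```python
import numpy as np

def ambient_brightness(pre_shot):
    """Estimate ambient light from a pre-shot RGB image.

    Per-pixel brightness is Rec. 601 luma; the scene brightness is the
    mean over all pixels (using the mean rather than, say, a percentile
    is an assumption).
    """
    r, g, b = pre_shot[..., 0], pre_shot[..., 1], pre_shot[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return float(luma.mean())

THRESHOLD = 50.0  # hypothetical preset threshold

dark_shot = np.full((4, 4, 3), 20.0)       # uniformly dim pre-shot image
needs_night_mode = ambient_brightness(dark_shot) < THRESHOLD
```

When the estimate falls below the preset threshold, the terminal proceeds to acquire the contour-bearing first image.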
C22, the mobile terminal as in C21, the processor further configured to:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
and correcting the brightness value based on the historical record brightness value.
C23, the mobile terminal as in C19, wherein the image contour includes the outer edge boundary of the scene in the image and the region enclosed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling the low-light night vision device to start and acquire a first image;
identifying the outline of each scene in the first image to obtain the outer edge boundary line of the outline of each scene in the first image;
and determining the image contour corresponding to each scene based on the acquired outer edge boundary lines.
C24, the mobile terminal according to C23, wherein the "recognizing the outline of each scene in the first image to obtain the outer edge boundary of the outline of each scene in the first image" comprises:
and identifying the outline of each scene in the first image by adaptive spatial-domain filtering to obtain the outer edge boundary line of the outline of each scene in the first image.
C25, the mobile terminal as described in C23, the controlling to activate the low-light night vision device to acquire the first image comprises:
controlling the low-light night vision device to capture a plurality of pictures at the same time;
and synthesizing the acquired plurality of pictures to generate the first image.
C26, the mobile terminal as described in C19, wherein the second image acquired by normal shooting of the camera is generated by:
controlling the camera to capture a plurality of photographs at the same time;
and synthesizing the generated plurality of photographs to generate the second image.
C27, the mobile terminal as in C19, wherein "superimposing the first image with the second image acquired by the camera and determining the region to be enhanced in the second image according to the image contour" comprises:
selecting a first image and a second image whose time information is consistent, aligning them, and superposing them;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.

Claims (24)

1. A method of picture generation, comprising:
acquiring the brightness value of light of the environment where the mobile terminal is located;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area of the first image is greater than or equal to that of the second image;
carrying out pixel enhancement processing on the region to be enhanced to obtain a clear, enhanced new picture;
the method for obtaining the brightness value of the light of the environment where the mobile terminal is located comprises the following steps:
acquiring the geographic position and the current time of the mobile terminal;
querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value;
correcting the brightness value based on the historical record brightness value;
wherein the image contour comprises an outer edge boundary of a scene in an image and a region included by the outer edge boundary, and the first image is shot by a low-light night vision device of the mobile terminal.
2. The method according to claim 1, wherein the obtaining the brightness value of the light of the environment where the mobile terminal is located comprises:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
and obtaining the brightness value by processing the electrical signal.
3. The method of claim 1, wherein the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light in the current environment of the mobile terminal based on the brightness values of the pixels.
4. The method of claim 1, wherein the image contour comprises an outer edge boundary of a scene in the image and a region enclosed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling the low-light night vision device to start and acquire a first image;
identifying the outline of each scene in the first image to obtain the outer edge boundary line of the outline of each scene in the first image;
and determining the image contour corresponding to each scene based on the acquired outer edge boundary lines.
5. The method of claim 4, wherein said identifying the outline of each scene in the first image to obtain the outer edge boundary of the outline of each scene in the first image comprises:
and identifying the outline of each scene in the first image by adaptive spatial-domain filtering to obtain the outer edge boundary line of the outline of each scene in the first image.
6. The method of claim 4, wherein controlling activation of the low-light night vision device to acquire the first image comprises:
controlling the low-light night vision device to capture a plurality of pictures at the same time;
and synthesizing the acquired plurality of pictures to generate the first image.
7. The method of claim 1, wherein the second image acquired by normal shooting of the camera of the mobile terminal is generated by:
controlling the camera to capture a plurality of photographs at the same time;
and synthesizing the generated plurality of photographs to generate the second image.
8. The method of claim 1, wherein superimposing the first image with a second image obtained by normal shooting of the camera and determining the region to be enhanced in the second image according to the image contour comprises:
selecting a first image and a second image whose time information is consistent, aligning them, and superposing them;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
9. An apparatus for picture generation, comprising:
the first acquisition module is used for acquiring the brightness value of the light of the environment where the mobile terminal is located;
the second acquisition module is used for acquiring a first image containing an image outline when the brightness value is smaller than a preset threshold value;
the determining module is used for overlapping the first image with a second image acquired by a camera of the mobile terminal through normal shooting, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area of the first image is greater than or equal to that of the second image;
the enhancement module is used for carrying out pixel enhancement processing on the region to be enhanced so as to obtain a clear, enhanced new picture;
the correction module is used for acquiring the geographic position and the current time of the mobile terminal; querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value; correcting the brightness value based on the historical record brightness value;
wherein the image contour comprises an outer edge boundary of a scene in an image and a region included by the outer edge boundary, and the first image is shot by a low-light night vision device of the mobile terminal.
10. The device of claim 9, wherein the first acquisition module is to:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal; and obtaining the brightness value by processing the electrical signal.
11. The apparatus of claim 9, wherein the camera comprises a first camera and/or a second camera;
the first obtaining module is configured to:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light in the current environment of the mobile terminal based on the brightness values of the pixels.
12. The apparatus of claim 9, wherein the image contour comprises an outer edge boundary of a scene in the image and a region enclosed by the outer edge boundary;
the second acquiring module "acquiring the first image containing the image contour" includes:
controlling the low-light night vision device to start and acquire a first image;
identifying the outline of each scene in the first image to obtain the outer edge boundary line of the outline of each scene in the first image;
and determining the image contour corresponding to each scene based on the acquired outer edge boundary lines.
13. The apparatus of claim 12, wherein the second acquiring module "identifying the outline of each scene in the first image to acquire the outer edge boundary of the outline of each scene in the first image" comprises:
and identifying the outline of each scene in the first image by adaptive spatial-domain filtering to obtain the outer edge boundary line of the outline of each scene in the first image.
14. The apparatus of claim 12, wherein the second acquiring module "enabling the low-light night vision device to acquire the first image" comprises:
controlling the low-light night vision device to capture a plurality of pictures at the same time;
and synthesizing the acquired plurality of pictures to generate the first image.
15. The apparatus of claim 9, wherein the second image acquired by normal shooting of the camera is generated by:
controlling the camera to capture a plurality of photographs at the same time;
and synthesizing the generated plurality of photographs to generate the second image.
16. The device of claim 9, wherein the determination module is to:
selecting a first image and a second image whose time information is consistent, aligning them, and superposing them;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
17. A mobile terminal, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
acquiring the brightness value of light of the environment where the mobile terminal is located; the processor is further configured to obtain a geographic location and a current time of the mobile terminal; querying a historical brightness database corresponding to the geographic position based on the acquired geographic position and the current time to acquire a historical record brightness value; correcting the brightness value based on the historical record brightness value;
if the brightness value is smaller than a preset threshold value, acquiring a first image containing an image contour;
superposing the first image and a second image acquired by normal shooting of a camera of the mobile terminal, and determining a region to be enhanced in the second image according to the image contour; the time corresponding to the first image is the same as the time corresponding to the second image, and the image area of the first image is greater than or equal to that of the second image;
carrying out pixel enhancement processing on the region to be enhanced to obtain a clear, enhanced new picture;
wherein the image contour comprises an outer edge boundary of a scene in an image and a region included by the outer edge boundary, and the first image is shot by a low-light night vision device of the mobile terminal.
18. The mobile terminal of claim 17, wherein said obtaining the brightness value of the light of the environment where the mobile terminal is located comprises:
acquiring an optical signal of the current environment of the mobile terminal through a light sensor;
converting the optical signal into an electrical signal;
and obtaining the brightness value by processing the electrical signal.
19. The mobile terminal of claim 17, wherein the camera comprises a first camera and/or a second camera;
the step of obtaining the brightness value of the light of the environment where the mobile terminal is located includes:
pre-shooting through the first camera and/or the second camera to obtain a pre-shot image;
performing brightness analysis processing on the pre-shot image to acquire the brightness value of each pixel in the pre-shot image;
and determining the brightness value of the light in the current environment of the mobile terminal based on the brightness values of the pixels.
20. The mobile terminal of claim 17, wherein the image contour comprises an outer edge boundary of a scene in the image and the region enclosed by the outer edge boundary;
the step of acquiring the first image containing the image contour comprises the following steps:
controlling the low-light night vision device to start and acquire a first image;
identifying the outline of each scene in the first image to obtain the outer edge boundary line of the outline of each scene in the first image;
and determining the image contour corresponding to each scene based on the acquired outer edge boundary lines.
21. The mobile terminal of claim 20, wherein said identifying the outline of each scene in the first image to obtain the outer boundary line of the outline of each scene in the first image comprises:
and identifying the outline of each scene in the first image by adaptive spatial-domain filtering to obtain the outer edge boundary line of the outline of each scene in the first image.
22. The mobile terminal of claim 20, wherein the controlling activation of the low-light night vision device to acquire the first image comprises:
controlling the low-light night vision device to capture a plurality of pictures at the same time;
and synthesizing the acquired plurality of pictures to generate the first image.
23. The mobile terminal of claim 17, wherein the second image acquired by normal shooting of the camera is generated by:
controlling the camera to capture a plurality of photographs at the same time;
and synthesizing the generated plurality of photographs to generate the second image.
24. The mobile terminal according to claim 17, wherein superimposing the first image with a second image obtained by normal shooting of the camera and determining a region to be enhanced in the second image according to the image contour comprises:
selecting a first image and a second image whose time information is consistent, aligning them, and superposing them;
determining the orthographic projection of the image contour in the first image on the second image after image superposition;
setting the orthographic projection area as an area to be enhanced in the second image.
CN201710217678.9A 2017-04-05 2017-04-05 Picture generation method and equipment and mobile terminal Expired - Fee Related CN106851119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710217678.9A CN106851119B (en) 2017-04-05 2017-04-05 Picture generation method and equipment and mobile terminal

Publications (2)

Publication Number Publication Date
CN106851119A CN106851119A (en) 2017-06-13
CN106851119B true CN106851119B (en) 2020-01-03

Family

ID=59142238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710217678.9A Expired - Fee Related CN106851119B (en) 2017-04-05 2017-04-05 Picture generation method and equipment and mobile terminal

Country Status (1)

Country Link
CN (1) CN106851119B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107566746B (en) * 2017-09-12 2020-09-04 广东小天才科技有限公司 Photographing method and user terminal
CN108012078B (en) * 2017-11-28 2020-03-27 Oppo广东移动通信有限公司 Image brightness processing method and device, storage medium and electronic equipment
CN110458909A (en) * 2019-08-05 2019-11-15 薄涛 Handle method, server, tutoring system and the medium of projected image
CN113347490B (en) * 2020-02-18 2022-08-16 RealMe重庆移动通信有限公司 Video processing method, terminal and storage medium
CN112118394B (en) * 2020-08-27 2022-02-11 厦门亿联网络技术股份有限公司 Dim light video optimization method and device based on image fusion technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227855A (en) * 2015-09-28 2016-01-06 广东欧珀移动通信有限公司 A kind of image processing method and terminal
CN105303543A (en) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN105654436A (en) * 2015-12-24 2016-06-08 广东迅通科技股份有限公司 Backlight image enhancement and denoising method based on foreground-background separation
WO2016088293A1 (en) * 2014-12-04 2016-06-09 Sony Corporation Imaging device, apparatus, and imaging method
CN106056594A (en) * 2016-05-27 2016-10-26 四川桑莱特智能电气设备股份有限公司 Double-spectrum-based visible light image extraction system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI239209B (en) * 2004-04-08 2005-09-01 Benq Corp A specific image extraction method, storage medium and image pickup device using the same



Similar Documents

Publication Publication Date Title
US11330194B2 (en) Photographing using night shot mode processing and user interface
CN106851119B (en) Picture generation method and equipment and mobile terminal
CN106558025B (en) Picture processing method and device
CN107506732B (en) Method, device, mobile terminal and computer storage medium for mapping
CN108038825B (en) Image processing method and mobile terminal
CN107124556B (en) Focusing method, focusing device, computer readable storage medium and mobile terminal
KR20160021238A (en) Method and terminal for acquiring panoramic image
CN107730460B (en) Image processing method and mobile terminal
CN109409235B (en) Image recognition method and device, electronic equipment and computer readable storage medium
CN106993136B (en) Mobile terminal and multi-camera-based image noise reduction method and device thereof
CN114710585A (en) Photographing method and terminal
CN110209245A (en) Face identification method and Related product
CN108512625A (en) Anti-interference method, mobile terminal and the storage medium of camera
CN107330867B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN108650466A (en) The method and electronic equipment of photo tolerance are promoted when a kind of strong light or reversible-light shooting portrait
CN110187769B (en) Preview image viewing method, equipment and computer readable storage medium
CN110177209B (en) Video parameter regulation and control method, device and computer readable storage medium
CN110099218B (en) Interactive control method and device in shooting process and computer readable storage medium
CN106688305B (en) Intelligent matching method and terminal of filter
CN108900779A (en) Initial automatic exposure convergence method, mobile terminal and computer readable storage medium
CN110177208B (en) Video recording association control method, equipment and computer readable storage medium
CN110069136B (en) Wearing state identification method and equipment and computer readable storage medium
CN110971822A (en) Picture processing method and device, terminal equipment and computer readable storage medium
CN106851023B (en) Method and equipment for quickly making call and mobile terminal
CN107257430B (en) A kind of camera control method, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200103

Termination date: 20210405