CN106878606B - Image generation method based on electronic equipment and electronic equipment - Google Patents


Info

Publication number
CN106878606B
CN106878606B (application CN201510918037.7A)
Authority
CN
China
Prior art keywords
image
camera
sub
target
image acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510918037.7A
Other languages
Chinese (zh)
Other versions
CN106878606A (en)
Inventor
何松
李汉华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201510918037.7A
Publication of CN106878606A
Application granted
Publication of CN106878606B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45: Cameras or camera modules comprising electronic image sensors, for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Abstract

Embodiments of the invention provide an image generation method based on an electronic device, and the electronic device, where the electronic device comprises at least a first image acquisition device and a second image acquisition device. The method comprises the following steps: starting the first image acquisition device and the second image acquisition device; photographing different target objects with the first image acquisition device and the second image acquisition device respectively, to obtain a corresponding first image and second image; and synthesizing the first image and the second image into a target image. Embodiments of the invention can reduce the system overhead of the electronic device and improve image-processing efficiency.

Description

Image generation method based on electronic equipment and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image generating method based on an electronic device and an electronic device for generating an image.
Background
At present, the photographing function has become one of the essential basic functions of consumer electronic devices such as mobile phones. More and more electronic devices are equipped with both a front camera and a rear camera, and many applications related to photographing and video recording have been introduced, greatly enriching people's life, entertainment, and work. The rear camera can be used to capture the small moments of life, while the front camera is often used for self-portraits, video chat, and other related applications.
In the prior art, if a user wants to combine a picture acquired by the front camera with a picture acquired by the rear camera, the user needs to import each picture into third-party image processing software, perform the composition there, and then export the composited picture back to the electronic device. This frequent import and export process inevitably increases the system overhead of the electronic device, and picture-processing efficiency is low.
Disclosure of Invention
In view of the above, the present invention has been made to provide an image generating method based on an electronic device and a corresponding electronic device for generating an image, which overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided an image generating method based on an electronic device, the electronic device including at least a first image capturing device and a second image capturing device;
the method comprises the following steps:
starting the first image acquisition device and the second image acquisition device;
photographing different target objects with the first image acquisition device and the second image acquisition device respectively, to obtain a corresponding first image and second image;
and synthesizing the first image and the second image into a target image.
Optionally, the first image capturing device and the second image capturing device are respectively located in different shooting planes.
Optionally, the first image acquisition device is a front camera, and the second image acquisition device is a rear camera.
Optionally, the step of synthesizing the first image and the second image into the target image includes:
identifying feature information in the first image;
extracting the feature information;
performing adjustment processing on the feature information;
and adding the adjusted feature information to a designated area of the second image to generate the target image.
Optionally, the designated area is an area within the depth-of-field range of the second image, and the step of adding the adjusted feature information to the designated area of the second image to generate the target image includes:
and adding the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image.
Optionally, before the step of adding the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image, the method further includes:
and performing adjustment processing on the second image, where the adjustment processing includes reducing the depth-of-field range of the second image.
Optionally, the feature information includes character feature information.
Optionally, the target object includes a first target object and a second target object, and the step of photographing different target objects with the first image capturing device and the second image capturing device respectively to obtain the corresponding first image and second image includes:
shooting the first target object by adopting the first image acquisition equipment to obtain a first image;
and shooting the second target object by adopting the second image acquisition equipment to obtain a second image.
Optionally, the second image capturing device is a dual-camera device comprising a first sub-camera and a second sub-camera that are located on the same shooting plane and adjacent to each other, and the step of photographing the second target object with the second image capturing device to obtain the second image includes:
respectively adopting the first sub-camera and the second sub-camera to synchronously shoot the second target object to obtain a corresponding third image and a corresponding fourth image;
and synthesizing the third image and the fourth image into the second image.
Optionally, the third image is a color image, and the fourth image is a black-and-white image.
Optionally, the first sub-camera comprises a color RGBW sensor, and the second sub-camera comprises a black-and-white night-vision sensor.
Optionally, the method further comprises:
and outputting the target image.
According to another aspect of the present invention, there is provided an electronic device for generating an image, the electronic device comprising at least a first image capturing device and a second image capturing device;
the electronic device further includes:
the starting module, adapted to start the first image acquisition device and the second image acquisition device;
the shooting module, adapted to photograph different target objects with the first image acquisition device and the second image acquisition device respectively, to obtain a corresponding first image and second image;
and the synthesis module, adapted to synthesize the first image and the second image into a target image.
Optionally, the first image capturing device and the second image capturing device are respectively located in different shooting planes.
Optionally, the first image acquisition device is a front camera, and the second image acquisition device is a rear camera.
Optionally, the synthesis module is further adapted to:
identifying feature information in the first image;
extracting the feature information;
adjusting the feature information;
and adding the adjusted feature information to a designated area of the second image to generate the target image.
Optionally, the synthesis module is further adapted to:
and adding the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image.
Optionally, the electronic device further comprises:
and the adjustment processing module, adapted to perform adjustment processing on the second image, where the adjustment processing includes reducing the depth-of-field range of the second image.
Optionally, the feature information includes character feature information.
Optionally, the target object includes a first target object and a second target object, and the photographing module is further adapted to:
shooting the first target object by adopting the first image acquisition equipment to obtain a first image;
and shooting the second target object by adopting the second image acquisition equipment to obtain a second image.
Optionally, the second image capturing device is a dual-camera device, and includes a first sub-camera and a second sub-camera located on the same shooting plane and adjacent to each other, and the shooting module is further adapted to:
respectively adopting the first sub-camera and the second sub-camera to synchronously shoot the second target object to obtain a corresponding third image and a corresponding fourth image;
and synthesizing the third image and the fourth image into the second image.
Optionally, the third image is a color image, and the fourth image is a black-and-white image.
Optionally, the first sub-camera comprises a color RGBW sensor, and the second sub-camera comprises a black-and-white night-vision sensor.
Optionally, the electronic device further comprises:
and the output module is suitable for outputting the target image.
According to the image generation method based on an electronic device, and the electronic device, provided by embodiments of the invention, after the first image acquisition device and the second image acquisition device of the electronic device are started, they can photograph different target objects respectively to obtain a corresponding first image and second image, and the first image and the second image are automatically synthesized into a target image. The user therefore does not need to import the two images into third-party image processing software for processing, which simplifies user operation and provides a higher degree of automation.
Furthermore, because pictures no longer need to be imported into or exported from third-party picture processing software, the system overhead of the electronic device can be reduced and picture-processing efficiency can be improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating a first embodiment of an image generation method based on an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a second embodiment of an electronic device based image generation method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a third embodiment of an image generation method based on an electronic device according to an embodiment of the present invention; and
fig. 4 is a block diagram of an embodiment of an electronic device for generating an image according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, a flowchart of the steps of embodiment one of an image generation method based on an electronic device according to an embodiment of the present invention is shown, where the electronic device may include at least a first image capturing device and a second image capturing device. The embodiment of the invention specifically includes the following steps:
step 101, starting the first image acquisition device and the second image acquisition device;
step 102, photographing different target objects with the first image acquisition device and the second image acquisition device respectively, to obtain a corresponding first image and second image;
and step 103, synthesizing the first image and the second image into a target image.
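As an illustrative sketch only, the three steps can be modeled in Python; the `capture` helper and the paste-style `synthesize` are invented stand-ins for real camera and compositing code, and images are represented as small 2-D lists of gray values.

```python
def capture(camera, size):
    # Placeholder: a real implementation would read a frame from the
    # camera driver; here each "image" is a uniform size x size gray patch.
    return [[camera["level"]] * size for _ in range(size)]

def synthesize(first, second):
    # Trivial composite: paste the first image into the top-left corner
    # of a copy of the second image.
    target = [row[:] for row in second]
    for y, row in enumerate(first):
        for x, v in enumerate(row):
            target[y][x] = v
    return target

def generate_image(front_cam, rear_cam):
    first = capture(front_cam, 2)     # step 102: front camera -> first image
    second = capture(rear_cam, 4)     # step 102: rear camera -> second image
    return synthesize(first, second)  # step 103: synthesize the target image

target = generate_image({"level": 1}, {"level": 9})
```

Embodiment two below refines the synthesis step with feature extraction and adjustment; this sketch only shows the overall control flow.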
In the embodiment of the invention, after the first image acquisition device and the second image acquisition device of the electronic device are started, they can photograph different target objects respectively to obtain a corresponding first image and second image, and the first image and the second image are automatically synthesized into a target image. The user therefore does not need to import the two pictures into third-party picture processing software for picture processing, which simplifies user operation and provides a higher degree of automation.
Furthermore, because pictures no longer need to be imported into or exported from third-party picture processing software, the system overhead of the electronic device can be reduced and picture-processing efficiency can be improved.
Referring to fig. 2, a flowchart of the steps of embodiment two of an image generation method based on an electronic device according to an embodiment of the present invention is shown. The method can be applied to an electronic device integrated with an image capturing device such as a camera, including a smart phone, a tablet, a Personal Digital Assistant (PDA), a camera, and the like. Further, the electronic device may include a display screen: the image capturing device realizes the photographing and shooting functions, while the display screen provides a preview of the shot picture, that is, the picture currently received by the camera is displayed in real time for the user to preview, achieving the effect of a viewfinder.
The electronic equipment at least comprises a first image acquisition device and a second image acquisition device.
Further, in a preferred embodiment, the first image capturing device and the second image capturing device are located in different shooting planes; for example, the first image capturing device may be located above the display screen, and the second image capturing device may be located on the plane of the back of the electronic device.
Further, in a preferred embodiment, the first image capturing device is a front camera, and the second image capturing device is a rear camera.
The embodiment of the invention specifically comprises the following steps:
step 201, starting the first image acquisition device and the second image acquisition device;
in a specific implementation, a process of an operating system running of the electronic device may be monitored, and when it is monitored that a "camera" (or other similar name, i.e., an application program that implements a photographing function) application program in the operating system is called to a foreground to run, the first image capturing device and the second image capturing device of the electronic device may be started.
After the first image capturing device and the second image capturing device are started, whether a shutter instruction is detected or not can be further judged, and when the shutter instruction is detected, the steps 202 and 203 are triggered to be executed.
In a specific implementation, the manner of issuing the shutter instruction includes, but is not limited to, the following: the shutter instruction is issued by pressing a physical key of the electronic device, in which case issuance is determined by detecting whether the circuit connected to the physical key is closed; or the shutter instruction is issued by pressing a virtual key on the touch screen of the electronic device, in which case issuance is determined by detecting whether the circuit of the screen area where the virtual key is located is conducted; or the shutter instruction is issued by voice, in which case voice data is received through a microphone of the terminal and analyzed to determine that the shutter instruction has been issued.
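The three detection channels can be sketched as a simple dispatcher; the event fields `type`, `circuit_closed`, `touch_in_key_area`, and `transcript` are invented names for illustration, not a real device API, and a real voice channel would involve actual speech recognition.

```python
def shutter_pressed(event):
    # Channel 1: physical key, detected by whether its circuit is closed.
    if event["type"] == "physical_key":
        return event["circuit_closed"]
    # Channel 2: virtual key, detected by a touch inside the key's area.
    if event["type"] == "virtual_key":
        return event["touch_in_key_area"]
    # Channel 3: voice, detected by analyzing the recognized transcript.
    if event["type"] == "voice":
        return "shutter" in event["transcript"].lower()
    return False
```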
Step 202, shooting the first target object by using the first image acquisition equipment to obtain a first image;
In a specific implementation, after the first image acquisition device is started, the first target object can be photographed: the scene passes through the lens of the first image acquisition device to form an optical image, the optical image is projected onto the sensor and converted into an electrical signal, the electrical signal is converted into a digital signal through analog-to-digital conversion, and the digital signal is processed by the DSP and then sent to the processor of the electronic device for processing, thereby obtaining the first image.
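The analog-to-digital conversion and DSP stages of this pipeline can be illustrated with a toy sketch; the 8-bit quantizer and the fixed-gain `dsp` step are simplifications invented for the example, not the device's actual signal chain.

```python
def adc(analog, full_scale=1.0, bits=8):
    # Quantize an analog sample in [0, full_scale] to an n-bit code.
    levels = (1 << bits) - 1
    clipped = max(0.0, min(analog, full_scale))
    return round(clipped / full_scale * levels)

def dsp(codes, gain=1.2, bits=8):
    # Stand-in DSP step: apply a fixed digital gain with clipping.
    top = (1 << bits) - 1
    return [min(top, round(c * gain)) for c in codes]

digital = [adc(v) for v in (0.0, 0.5, 1.0)]
processed = dsp(digital)
```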
Generally, the front camera is mainly used for photographing people; therefore, the first target object may be a target object that includes a person.
Step 203, shooting the second target object by using the second image acquisition device to obtain a second image;
in a particular implementation, the second target object is not the same as the first target object. Since the shooting scene of the rear camera is wide and can be represented by scenery, people, sports, night scenes and the like, the second target object can include target objects such as scenery, people, sports, night scenes and the like.
After the second image acquisition device is started, the second target object can be photographed: the scene passes through the lens of the second image acquisition device to form an optical image, the optical image is projected onto the sensor and converted into an electrical signal, the electrical signal is converted into a digital signal through analog-to-digital conversion, and the digital signal is processed by the DSP and then sent to the processor of the electronic device for processing, thereby obtaining the second image.
And 204, synthesizing the first image and the second image into a target image.
After the first image and the second image are obtained, the first image can be fused into the second image as seamlessly as possible to generate the target image, giving the user a more natural and pleasing result.
In a specific implementation, a composition function switch may be added to the settings interface of the electronic device, and this switch may be set in advance to be on or off by default. After the camera application is started, if the composition function is detected to be on, then after the front camera and the rear camera have been used in succession to capture the first image and the second image, the two images are synthesized.
In another embodiment, the composition process may also be triggered by a composition instruction. The composition instruction may be given by pressing a designated physical key or virtual key in the electronic device.
In a preferred embodiment of the present invention, step 204 further comprises the following sub-steps:
a substep S11 of identifying feature information in the first image;
in a specific implementation, the user may specify in advance the feature information to be identified in the first image. Since the first image is generally an image including a human object, the feature information may preferably be human feature information.
Further, the character feature information may include the content of at least one body part, such as the head, the torso, and the like.
In one embodiment, based on image recognition technology, whether the first image contains person features can be automatically detected from the preview frame data of the first image acquisition device, and the outline of the person can then be recognized.
In another embodiment, the user can manually select the character feature information in the first image through a touch instruction according to actual requirements, and the electronic device then identifies the character feature information manually selected by the user.
Note that, in addition to the character feature information, the feature information may be other feature information, such as animal feature information, plant feature information, article feature information, and the like.
A substep S12 of extracting the feature information;
after the feature information is identified, the feature information can be further extracted, and the feature information is used as a basis for subsequent adjustment or combination.
A substep S13 of performing adjustment processing on the characteristic information;
In a specific implementation, the maximum resolution of the front camera is generally lower than that of the rear camera; for example, the resolution of the front camera may be 10M while the resolution of the rear camera is 20M. When the resolution difference is large, the portion of the target image taken from the first image output by the front camera appears small relative to the second image output by the rear camera, which looks unnatural to the user. The feature information extracted from the first image may therefore be adjusted so that, after the adjustment processing, it is closer to the image quality of the second image; that is, the parameters of the adjusted first image are brought closer to the parameters of the second image, for example until the difference or ratio between corresponding parameters is smaller than a predetermined threshold.
Further, as an example, the parameters of the first image and the parameters of the second image may include resolution, viewing angle, size, scale, color, sensitivity, exposure time, sharpness, and/or the like.
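A minimal sketch of the "difference or ratio smaller than a predetermined threshold" test might look as follows; the parameter names and the 1.5 ratio threshold are invented for illustration.

```python
def parameters_match(p1, p2, max_ratio=1.5):
    # Compare each parameter shared by the two images; if any pair's
    # ratio exceeds the threshold, the images are not yet close enough.
    for name in p1.keys() & p2.keys():
        a, b = p1[name], p2[name]
        if max(a, b) / min(a, b) > max_ratio:
            return False, name
    return True, None

ok, bad = parameters_match({"resolution": 10, "sharpness": 0.8},
                           {"resolution": 20, "sharpness": 0.9})
# resolution ratio 2.0 exceeds 1.5, so this pair does not yet match
```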
In an embodiment, the adjustment processing on the feature information may increase the pixel count and resolution of the feature information, for example as follows: the feature information is extracted from multiple frames of the first image and merged to obtain feature information with more pixels, so that its image quality matches the second image as closely as possible.
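The multi-frame merge can be illustrated by simple frame averaging, which suppresses noise; real multi-frame super-resolution involves alignment and upsampling, so this is only a sketch.

```python
def merge_frames(frames):
    # Average several aligned frames pixel by pixel; independent noise
    # cancels out, leaving a cleaner estimate of the true value.
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

frames = [[[100, 102]], [[102, 98]], [[98, 100]]]  # three noisy 1x2 frames
merged = merge_frames(frames)
```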
In addition, the adjustment processing on the feature information may also be: automatically or manually adjusting the viewing angle, size, proportion, color, light sensitivity, exposure time, and/or sharpness of the feature information, and cropping it where necessary, so as to obtain a more natural and satisfactory effect.
Specifically, because the brightness of the front and rear cameras is likely to differ (for example, outdoors one camera may be backlit while the other is front-lit), the brightness difference can be large and the effect after merging poor. The brightness of the feature information extracted from the first image can therefore be adjusted before merging, so that the brightness of the merged images matches as closely as possible.
In addition, because the shooting scenes may be inconsistent, some fine adjustment of the colors of the feature information is needed to avoid abnormal colors at the merge boundary. At the same time, a series of fusion operations, including optimization of edge brightness and sharpness, is performed on the edges of the feature information so that the effect is more natural. The related information of the front and rear cameras, such as resolution, light sensitivity, and exposure time, can also be retained to facilitate subsequent editing and adjustment.
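The brightness-matching step described above can be sketched as a mean-brightness gain applied to the extracted feature pixels; the single global gain is a simplification invented for the example.

```python
def match_brightness(feature, background):
    # Scale the feature pixels so their mean brightness equals the
    # background's mean brightness, clipping to the 8-bit range.
    flat_f = [v for row in feature for v in row]
    flat_b = [v for row in background for v in row]
    gain = (sum(flat_b) / len(flat_b)) / (sum(flat_f) / len(flat_f))
    return [[min(255, round(v * gain)) for v in row] for row in feature]

adjusted = match_brightness([[50, 100]], [[150, 150]])  # means 75 vs 150
```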
In a preferred embodiment of the present invention, the second image may also be subjected to adjustment processing. For example, adjusting the second image may include: automatically detecting and identifying the shooting scene of the second image, such as scenery, people, sports, or a night scene.
In order to make the merged image more natural, the adjustment processing on the second image according to the embodiment of the present invention may further include reducing the depth of field of the second image, so as to achieve the effect of blurring the background.
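The background-blurring effect of a reduced depth of field can be imitated with a simple box blur; a real implementation would blur selectively according to estimated depth, so this 1-D row blur is only illustrative.

```python
def blur_row(row, radius=1):
    # Box blur: each output pixel is the average of its neighborhood,
    # with the window clipped at the row boundaries.
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

blurred = [blur_row(r) for r in [[0, 0, 90, 0, 0]]]
```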
The embodiment of the invention can adjust the characteristic information and/or the second image, so that the adjusted characteristic information is closer to the image quality of the second image, the images of the front camera and the rear camera are more naturally fused together, and the quality of the synthesized image is improved.
And a substep S14 of adding the characteristic information after the adjustment processing to a designated region of the second image to generate a target image.
In a preferred embodiment of the present invention, the sub-step S14 may further be: and adding the adjusted characteristic information serving as a new layer into an area within the depth range of the second image to generate a target image.
After the feature information and/or the second image are/is adjusted, the adjusted feature information may be used as foreground information, the second image may be used as background information, and the feature information may be added to a specified area of the second image as a new image layer to generate a target image.
In a particular implementation, the designated area of the second image may be an area within a depth of field of the second image. Alternatively, the designated area may also be an area that the user manually designates in the second image.
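Sub-step S14, adding the adjusted feature information as a new layer to the designated area, can be sketched as masked compositing; the binary mask and the `top`/`left` placement are invented details for the example.

```python
def composite(background, layer, mask, top, left):
    # Paste the feature layer into the background at (top, left); the
    # mask acts as the layer's transparency (1 = feature pixel,
    # 0 = keep the background pixel underneath).
    out = [row[:] for row in background]
    for y, row in enumerate(layer):
        for x, v in enumerate(row):
            if mask[y][x]:
                out[top + y][left + x] = v
    return out

bg = [[9, 9, 9], [9, 9, 9], [9, 9, 9]]
fg = [[1, 1], [1, 1]]
mask = [[1, 0], [1, 1]]
target = composite(bg, fg, mask, top=0, left=1)
```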
In the embodiment of the invention, image synthesis across the front and rear cameras can be realized on the hardware of existing electronic devices, so that the content of the synthesized target image is more accurate and better meets the user's requirements. No third-party image processing software is needed for the composition, which eliminates the import and export operations in such software, increases the image synthesis speed, saves electronic device resources, and reduces the system overhead of the electronic device.
Referring to fig. 3, a flowchart of the steps of embodiment three of an image generation method based on an electronic device according to an embodiment of the present invention is shown. The method can be applied to an electronic device integrated with an image capturing device such as a camera, including a smart phone, a tablet, a Personal Digital Assistant (PDA), a camera, and the like. Further, the electronic device may include a display screen: the image capturing device realizes the photographing and shooting functions, while the display screen provides a preview of the shot picture, that is, the picture currently received by the camera is displayed in real time for the user to preview, achieving the effect of a viewfinder.
The electronic equipment at least comprises a first image acquisition device and a second image acquisition device.
Further, in a preferred embodiment, the first image capturing device and the second image capturing device are located in different shooting planes; for example, the first image capturing device may be located above the display screen, and the second image capturing device may be located on the plane of the back of the electronic device.
Further, in a preferred embodiment, the first image capturing device is a front camera, and the second image capturing device is a rear camera.
Furthermore, the second image capturing device may be a dual-camera device, and includes a first sub-camera and a second sub-camera located on the same shooting plane and adjacent to each other, where the first sub-camera is configured to capture a third image, and the second sub-camera is configured to capture a fourth image.
Further, the third image may be a color image; the fourth image may be a black and white image.
Specifically, in the embodiment of the present invention, the first sub-camera includes a color RGBW sensor, which is responsible for color and can be used to capture a color image; the second sub-camera includes a black-and-white night vision sensor, which is responsible for contour, detail, and brightness information, and can be used to capture black-and-white images.
RGBW differs from RGB chiefly in the filter array placed immediately in front of the sensor. The RGBW technique adds a white (W) pixel to the original three primary colors of red (R), green (G), and blue (B), giving a four-color pixel design. The W area carries no filter, so its light throughput is 3-4 times that of the filtered areas; this markedly increases the amount of light reaching the pixels under low illumination and reduces noise. Experiments show that using an RGBW sensor in the first sub-camera can improve brightness in high-contrast environments by 40% and reduce noise in low-light environments by 78%. In addition, the W area of the RGBW sensor can provide an independent gray-scale sensing region, enabling accurate gray-scale capture under low illumination, that is, more accurate low-light white balance.
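As a rough illustration (not part of the patent), the difference between the two filter arrays can be sketched as follows. The 3-4x light gain for the unfiltered W cell comes from the text; the tile layouts and the `w_gain` midpoint of 3.5 are illustrative assumptions, and real sensor layouts vary:

```python
# Illustrative 2x2 color-filter-array tiles: a conventional Bayer (RGGB)
# tile versus an RGBW tile whose 'W' cell carries no filter.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]
RGBW_TILE = [["R", "G"],
             ["B", "W"]]

def relative_light(tile, w_gain=3.5):
    """Average relative light throughput per pixel: each filtered cell
    counts as 1 unit, an unfiltered W cell as w_gain units (the text
    cites 3-4x, so 3.5 is an assumed midpoint)."""
    cells = [c for row in tile for c in row]
    return sum(w_gain if c == "W" else 1.0 for c in cells) / len(cells)

print(relative_light(BAYER_TILE))  # 1.0
print(relative_light(RGBW_TILE))   # 1.625 -- more light per pixel on average
```

This simple average is why, under low illumination, the RGBW sensor's pixels receive noticeably more light than those behind a conventional filter array.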
The black-and-white night vision (MONO) sensor is a fully light-transmissive sensor: all incoming light reaches the sensor directly and is captured. It adopts a stacked structure, and its advantage is higher signal purity in low-light environments, which allows the device to cope comfortably even when light is insufficient.
In a particular implementation, the color RGBW sensor includes a color filter, while the black and white night vision sensor does not include a color filter.
Specifically, each camera integrates a photosensitive element on which densely packed pixels are distributed; each pixel can only sense the intensity of light, not its color. Color identification is therefore achieved by adding a color filter on top of each pixel so that only light of a specified color reaches it. However, the color filter blocks most of the light, so the sensitivity of the sensor (i.e., the ISO) must be raised in dim conditions, which inevitably introduces noise. On the other hand, if the camera's filter is removed, the camera can no longer sense the color of light and becomes a black-and-white image acquisition device, but every pixel obtains an electric signal, and because the light is received by the sensor in full, the greater the amount of incoming light, the clearer and more detailed the image. Experiments show that the light intake of a black-and-white night vision sensor without a filter is 4 times that of a common color sensor with a filter.
Therefore, in the embodiment of the present invention, the rear camera employs at least two sub-cameras and adopts a color light-separation technique: one sub-camera (i.e., the first sub-camera) includes a color filter and is responsible for color perception, while the other sub-camera (i.e., the second sub-camera) does not include a color filter and is responsible for collecting light, so that a good imaging effect is obtained in a low-light environment.
The embodiment of the invention specifically comprises the following steps:
step 301, starting the first image acquisition device and the second image acquisition device;
in a specific implementation, the processes run by the operating system of the electronic device may be monitored, and when a "camera" application (or another similarly named application implementing the photographing function) in the operating system is called to the foreground, the first image capturing device and the second image capturing device of the electronic device may be started.
In practice, when the second image acquisition device is started, only the first sub-camera may be started if the light in the current shooting environment is sufficient; in a night vision environment, the first sub-camera and the second sub-camera may be started simultaneously.
In one embodiment, whether the current shooting scene is a night vision scene may be identified as follows: acquiring a light intensity parameter of a current shooting scene; if the light intensity parameter is smaller than or equal to a preset threshold value, identifying that the current shooting scene is a night vision scene; and if the light intensity parameter is larger than a preset threshold value, identifying that the current shooting scene is not a night vision scene.
Specifically, light intensity refers to the brightness of an object after light is projected onto it. In a specific implementation, the light intensity parameter may be obtained through a photosensitive element in the electronic device and compared with a preset threshold: if the light intensity parameter is less than or equal to the preset threshold, the current shooting scene may be identified as a night vision scene; otherwise, if the light intensity parameter is greater than the preset threshold, the current shooting scene may be identified as not a night vision scene, that is, a daytime scene.
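The scene-identification and camera-start logic described above can be sketched as follows; the threshold value, function names, and camera labels are placeholder assumptions, not values from the patent:

```python
def is_night_scene(light_intensity, threshold=50.0):
    """Per the text: the scene is a night-vision scene when the measured
    light intensity parameter is less than or equal to the preset
    threshold. The default threshold of 50.0 is an assumed placeholder."""
    return light_intensity <= threshold

def sub_cameras_to_start(light_intensity, threshold=50.0):
    """Decide which rear sub-cameras to start: only the first (color)
    sub-camera in sufficient light, both sub-cameras in a night-vision
    scene, as described in the embodiment."""
    if is_night_scene(light_intensity, threshold):
        return ["first_sub_camera", "second_sub_camera"]
    return ["first_sub_camera"]

print(sub_cameras_to_start(500.0))  # daytime: only the color sub-camera
print(sub_cameras_to_start(10.0))   # night vision: both sub-cameras
```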
Step 302, shooting the first target object by using the first image acquisition device to obtain a first image;
specifically, after the front camera is started, the first target object can be shot to obtain a first image.
Generally, the front camera is mainly used for shooting people; therefore, the first target object may be a target object including a person.
Step 303, shooting the second target object by using the second image acquisition device to obtain a second image;
in a particular implementation, the second target object is not the same as the first target object. Since the shooting scenes of the rear camera are wide-ranging, typically covering scenery, people, sports, night scenes, and the like, the second target object may include target objects such as scenery, people, sports, and night scenes.
In a preferred embodiment of the present invention, when the first sub-camera and the second sub-camera are activated simultaneously to capture the same second target object in a night vision environment, step 303 may include the following sub-steps:
a substep S21, synchronously shooting the second target object by using the first sub-camera and the second sub-camera respectively to obtain a corresponding third image and a corresponding fourth image;
and a substep S22 of combining the third image and the fourth image into the second image.
In a specific implementation, the first sub-camera may support two focusing modes of phase focusing and contrast focusing. Furthermore, the embodiment of the invention can realize faster focusing according to the principle of triangular distance measurement.
Specifically, according to the triangulation principle, the two lenses of the dual camera and the focused second target object form a triangle. By calculating the angle between the second target object and each of the two lenses, together with the known distance between the two lenses, the distance of the second target object can be accurately calculated, that is, a first distance from the second target object to the first sub-camera and a second distance from the second target object to the second sub-camera.
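The triangulation step can be sketched with the law of sines. The convention that the two angles are measured between the baseline and each camera's line of sight is an assumption for illustration; the patent does not fix a particular formulation:

```python
import math

def triangulate_distances(baseline, angle_a, angle_b):
    """The two sub-cameras and the focused target form a triangle whose
    base is the known spacing between the cameras (baseline). angle_a and
    angle_b are the angles, in radians, between the baseline and each
    camera's line of sight. Returns (d1, d2): the target's distance from
    each sub-camera, by the law of sines."""
    gamma = math.pi - angle_a - angle_b                  # angle at the target
    d1 = baseline * math.sin(angle_b) / math.sin(gamma)  # side opposite angle_b
    d2 = baseline * math.sin(angle_a) / math.sin(gamma)  # side opposite angle_a
    return d1, d2

# Equilateral case: both sight lines at 60 degrees, so each distance
# equals the baseline.
d1, d2 = triangulate_distances(1.0, math.radians(60), math.radians(60))
print(round(d1, 6), round(d2, 6))  # 1.0 1.0
```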
The embodiment of the invention may also employ a closed-loop focus motor, a newer technology in electronic-device lenses, which can improve the focusing precision and speed of the lens to a certain extent and thereby bring a better photographing experience.
In a preferred embodiment of the present invention, the sub-step S22 further includes the following sub-steps:
a substep S221, focusing the second target object according to the first distance with the aid of the closed-loop focusing motor by using the first sub-camera to obtain a third image;
and a substep S222, focusing the second target object according to the second distance with the aid of the closed-loop focusing motor by using the second sub-camera, and obtaining a fourth image.
In a specific implementation, the principle of in-focus imaging is as follows: the closed-loop focusing motor drives the lens back and forth until the image appears sharp on the image sensor, completing basic focusing; the optical system focuses the image on the imaging element, an A/D converter converts the photoelectric signal at each pixel into a digital signal, and a DSP (digital signal processor) processes the digital signals into a digital image.
By using the triangulation principle together with the closed-loop focusing motor, the embodiment of the invention achieves a higher focusing speed, even so-called "0-second" ultra-fast focusing, without sacrificing pixels.
After the third image and the fourth image are obtained, they can be synthesized into the second image, combining the advantages of both to present a better-imaged second image.
Specifically, since the third image is an image that emphasizes color, it may lack detail information such as contour and brightness, whereas the fourth image is an image that emphasizes contour, detail, and brightness. Therefore, the pixel points related to the contour, detail, and brightness of the target object in the fourth image can be extracted and synthesized into the third image, making up for the deficiencies of the third image and yielding a better-imaged result.
Of course, the way of combining the third image and the fourth image is not limited to the above. Those skilled in the art may combine them in other ways; for example, the identical pixels of the third image and the fourth image may be superimposed, or the brightness values of each corresponding pixel in the third and fourth images may be compared and subtracted, so as to eliminate the noise in the third image while ensuring that the color, brightness, and contrast of its normal pixels are not affected.
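One simple way to realize the fusion just described, keeping the color image's chroma while borrowing the mono image's luminance detail, is a per-pixel luma blend. The blend weight, function name, and BT.601 luma coefficients are illustrative assumptions, not specifics from the patent:

```python
def fuse_luminance(color_px, mono_px, weight=0.6):
    """Fuse one color pixel (r, g, b) with the co-located mono pixel.
    The mono image carries the detail/brightness, so its value is blended
    into the color pixel's luma, and the chroma ratios are preserved by
    rescaling the three channels. `weight` is an assumed blend factor."""
    r, g, b = color_px
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma approximation
    target = (1 - weight) * luma + weight * mono_px
    scale = target / luma if luma > 0 else 0.0
    clamp = lambda v: max(0, min(255, round(v)))
    return (clamp(r * scale), clamp(g * scale), clamp(b * scale))

# A gray pixel whose mono counterpart is twice as bright is brightened
# toward the mono value while staying gray (chroma preserved).
print(fuse_luminance((100, 100, 100), 200, weight=1.0))  # (200, 200, 200)
```

A production pipeline would instead convert the whole color frame to a luma/chroma space, merge the mono frame into the luma plane, and convert back; this per-pixel sketch shows only the core idea.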
Step 304, synthesizing the first image and the second image into a target image;
in a preferred embodiment of the present invention, step 304 may further comprise the following sub-steps:
a substep S31 of identifying feature information in the first image;
in a specific implementation, the user may specify in advance the feature information to be identified in the first image. Since the first image is generally an image including a human object, the feature information may preferably be human feature information.
A substep S32 of extracting the feature information;
after the feature information is identified, the feature information can be further extracted, and the feature information is used as a basis for subsequent adjustment or combination.
A substep S33 of performing adjustment processing on the characteristic information;
in one embodiment, adjustment processing is performed on the feature information so that the pixel density and resolution of the feature information can be improved and its image quality can match the second image as closely as possible.
In addition, the adjustment processing of the feature information may be: automatically or manually adjusting the viewing angle, size, proportion, color, light sensitivity, exposure time, and/or sharpness of the feature information, and optionally cropping it, so as to obtain a more natural and satisfactory effect.
In a preferred embodiment of the present invention, the second image may also be subjected to adjustment processing. For example, adjusting the second image may include automatically detecting and identifying the shooting scene of the second image, such as scenery, people, sports, or a night scene.
In order to make the merged image more natural, the adjusting of the second image according to the embodiment of the present invention may further include reducing a depth of field of the second image, so as to achieve an effect of blurring the background.
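The background-blurring step can be sketched with a naive box blur over a grayscale image; a real pipeline would use an optimized Gaussian blur and apply it only to pixels outside the subject mask, so this is a minimal stand-in:

```python
def box_blur(img, radius=1):
    """Naive box blur over a 2-D grayscale image (list of lists of
    numbers): each output pixel is the mean of its (2*radius+1)^2
    neighborhood, clipped at the image borders. Blurring the background
    layer this way approximates the reduced-depth-of-field effect."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A hard edge is softened: the 0/100 boundary averages to 50.
print(box_blur([[0, 100], [0, 100]])[0][0])  # 50.0
```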
And a substep S34 of adding the characteristic information after the adjustment processing to a designated region of the second image to generate a target image.
In a preferred embodiment of the present invention, the sub-step S34 may further be:
and adding the adjusted characteristic information serving as a new layer into an area within the depth range of the second image to generate a target image.
After the feature information and/or the second image is adjusted, the adjusted feature information may be used as foreground information and the second image as background information, and the feature information may be added to a designated area of the second image as a new layer to generate the target image.
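The foreground-over-background layering can be sketched as per-pixel alpha compositing of the extracted feature region into the designated area; all names, the mask convention, and the grayscale simplification are illustrative assumptions:

```python
def composite_layer(foreground, mask, background, top, left):
    """Paste the adjusted foreground (e.g. the extracted person region)
    into the designated area of the background as a new layer. Images are
    2-D lists of grayscale values; mask entries in [0, 1] give per-pixel
    foreground opacity; (top, left) anchors the designated area."""
    out = [row[:] for row in background]
    for y, row in enumerate(foreground):
        for x, px in enumerate(row):
            yy, xx = top + y, left + x
            if 0 <= yy < len(out) and 0 <= xx < len(out[0]):
                a = mask[y][x]
                out[yy][xx] = a * px + (1 - a) * out[yy][xx]
    return out

background = [[0.0] * 3 for _ in range(3)]
result = composite_layer([[10.0]], [[1.0]], background, top=1, left=1)
print(result[1][1])  # 10.0 -- foreground replaces the background pixel
```

With RGB images the same blend is applied per channel, and a soft-edged mask avoids visible seams between the person and the background.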
In a particular implementation, the designated area of the second image may be an area within the depth-of-field range of the second image. Alternatively, the designated area may be an area that the user manually designates in the second image.
Step 305, outputting the target image.
Specifically, outputting may refer to outputting the target image to a storage medium of the electronic device. In subsequent operations, the user may call up the target image through the display screen of the electronic device to view it, or copy the target image from the storage medium to display or store it on another device.
In the embodiment of the invention, the second image acquisition device may be a dual-camera device comprising a first sub-camera and a second sub-camera. In a night vision scene, the two sub-cameras can simultaneously shoot the second target object to obtain a color image and a black-and-white image respectively, which are then synthesized into the second image. No further software post-processing of the shot picture is needed, which makes operation convenient, reduces system overhead, and improves the imaging effect.
For simplicity of explanation, the method embodiments are described as a series of acts or combinations, but those skilled in the art will appreciate that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the embodiments of the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of an embodiment of an electronic device for generating an image according to an embodiment of the present invention is shown, the electronic device including at least a first image capturing device and a second image capturing device; the electronic device may further include the following modules:
a starting module 401 adapted to start the first image capturing device and the second image capturing device;
a shooting module 402, adapted to respectively shoot with the first image capturing device and the second image capturing device for different target objects, so as to obtain a corresponding first image and a corresponding second image;
a synthesizing module 403 adapted to synthesize the first image and the second image into a target image.
In a preferred embodiment of the embodiments of the present invention, the first image capturing device and the second image capturing device are respectively located in different shooting planes.
In a preferred embodiment of the present invention, the first image capturing device is a front camera, and the second image capturing device is a rear camera.
In a preferred embodiment of the present invention, the synthesis module 403 is further adapted to:
identifying feature information in the first image;
extracting the characteristic information;
adjusting the characteristic information;
and adding the adjusted characteristic information to a specified area of the second image to generate a target image.
In a preferred embodiment of the present invention, the synthesis module 403 is further adapted to:
and adding the adjusted characteristic information serving as a new layer into an area within the depth range of the second image to generate a target image.
In a preferred embodiment of the present invention, the electronic device further includes:
and the adjusting processing module is suitable for adjusting the second image, and the adjusting processing comprises reducing the depth of field range of the second image.
In a preferred embodiment of the present invention, the feature information includes character feature information.
In a preferred embodiment of the present invention, the target objects include a first target object and a second target object, and the capturing module 402 is further adapted to:
shooting the first target object by adopting the first image acquisition equipment to obtain a first image;
and shooting the second target object by adopting the second image acquisition equipment to obtain a second image.
In a preferred embodiment of the present invention, the second image capturing device is a dual-camera device, and includes a first sub-camera and a second sub-camera which are located on the same shooting plane and are adjacent to each other, and the shooting module 402 is further adapted to:
respectively adopting the first sub-camera and the second sub-camera to synchronously shoot the second target object to obtain a corresponding third image and a corresponding fourth image;
and synthesizing the third image and the fourth image into the second image.
In a preferred embodiment of the embodiments of the present invention, the third image is a color image, and the fourth image is a black-and-white image.
In a preferred embodiment of the present invention, the first sub-camera comprises a color RGBW sensor, and the second sub-camera comprises a black and white night vision sensor.
In a preferred embodiment of the present invention, the electronic device further includes:
and the output module is suitable for outputting the target image.
For the embodiment of the electronic device, since it is basically similar to the embodiment of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiment of the method.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in an electronic device based image generation device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The invention discloses A1, an image generation method based on electronic equipment, the electronic equipment at least comprises a first image acquisition device and a second image acquisition device;
the method comprises the following steps:
starting the first image acquisition device and the second image acquisition device;
shooting different target objects by respectively adopting the first image acquisition equipment and the second image acquisition equipment to obtain corresponding first images and second images;
and synthesizing the first image and the second image into a target image.
A2, the method of A1, the first image capturing device and the second image capturing device being located in different shooting planes respectively.
A3, the method of A2, wherein the first image capturing device is a front camera and the second image capturing device is a rear camera.
A4, the method of A1, A2 or A3, the step of composing the first image and the second image into a target image comprising:
identifying feature information in the first image;
extracting the characteristic information;
adjusting the characteristic information;
and adding the adjusted characteristic information to a specified area of the second image to generate a target image.
A5, the method according to A4, wherein the designated area is an area within the depth of field of the second image, the step of adding the adjusted feature information to the designated area of the second image is to:
and adding the adjusted characteristic information serving as a new layer into an area within the depth range of the second image to generate a target image.
A6, the method according to A5, wherein before the step of adding the adjusted feature information as a new layer to an area within the depth of field of the second image and generating a target image, the method further includes:
and performing adjustment processing on the second image, wherein the adjustment processing comprises reducing the depth range of the second image.
A7, the method as in A5 or A6, wherein the characteristic information includes person characteristic information.
A8, the method as in A1, wherein the target objects include a first target object and a second target object, and the step of shooting different target objects with the first image capturing device and the second image capturing device respectively to obtain the corresponding first image and second image includes:
shooting the first target object by adopting the first image acquisition equipment to obtain a first image;
and shooting the second target object by adopting the second image acquisition equipment to obtain a second image.
A9, the method as in A8, wherein the second image capturing device is a dual-camera device, and includes a first sub-camera and a second sub-camera located on the same shooting plane and adjacent to each other, and the step of capturing the second target object with the second image capturing device to obtain the second image includes:
respectively adopting the first sub-camera and the second sub-camera to synchronously shoot the second target object to obtain a corresponding third image and a corresponding fourth image;
and synthesizing the third image and the fourth image into the second image.
A10, the method of A9, wherein the third image is a color image and the fourth image is a black and white image.
A11, the method of A10, the first sub-camera comprising a color RGBW sensor, the second sub-camera comprising a black and white night vision sensor.
A12, the method of A1, further comprising:
and outputting the target image.
The invention also discloses B13, an electronic device for generating images, which at least comprises a first image acquisition device and a second image acquisition device;
the electronic device further includes:
the starting module is suitable for starting the first image acquisition device and the second image acquisition device;
the shooting module is suitable for respectively adopting the first image acquisition device and the second image acquisition device to shoot different target objects to obtain corresponding first images and second images;
and the synthesis module is suitable for synthesizing the first image and the second image into a target image.
B14, the electronic device of B13, wherein the first image acquisition device and the second image acquisition device are respectively located in different shooting planes.
B15, the electronic device of B14, wherein the first image acquisition device is a front camera and the second image acquisition device is a rear camera.
B16, an electronic device as described in B13 or B14 or B15, the synthesis module further adapted to:
identifying feature information in the first image;
extracting the characteristic information;
adjusting the characteristic information;
and adding the adjusted characteristic information to a specified area of the second image to generate a target image.
B17, the electronic device of B16, the synthesis module further adapted to:
and adding the adjusted characteristic information serving as a new layer into an area within the depth range of the second image to generate a target image.
B18, the electronic device of B17, further comprising:
and the adjusting processing module is suitable for adjusting the second image, and the adjusting processing comprises reducing the depth of field range of the second image.
B19, the electronic device of B17 or B18, wherein the characteristic information includes person characteristic information.
B20, the electronic device of B13, the target objects including a first target object and a second target object, the photographing module further adapted to:
shooting the first target object by adopting the first image acquisition equipment to obtain a first image;
and shooting the second target object by adopting the second image acquisition equipment to obtain a second image.
B21, the electronic device according to B20, wherein the second image capturing device is a dual-camera device, and comprises a first sub-camera and a second sub-camera located on the same shooting plane and adjacent to each other, and the shooting module is further adapted to:
respectively adopting the first sub-camera and the second sub-camera to synchronously shoot the second target object to obtain a corresponding third image and a corresponding fourth image;
and synthesizing the third image and the fourth image into the second image.
B22, the electronic device according to B21, wherein the third image is a color image and the fourth image is a black-and-white image.
B23, the electronic device as in B22, the first sub-camera comprising a color RGBW sensor and the second sub-camera comprising a black and white night vision sensor.
B24, the electronic device of B13, further comprising:
and the output module is suitable for outputting the target image.

Claims (16)

1. An image generation method based on electronic equipment, wherein the electronic equipment at least comprises a first image acquisition device and a second image acquisition device;
the method comprises the following steps:
starting the first image acquisition device and the second image acquisition device, wherein the first image acquisition device is a front camera and the second image acquisition device is a rear camera, the second image acquisition device being a dual-camera device comprising a first sub-camera and a second sub-camera that are located on the same shooting plane and adjacent to each other, the first sub-camera natively supporting both phase-detection focusing and contrast focusing;
the step of starting the second image acquisition device comprises:
if the current shooting environment has sufficient light, starting only the first sub-camera;
and if the current shooting environment is a night-vision environment, starting the first sub-camera and the second sub-camera simultaneously;
capturing different target objects with the first image acquisition device and the second image acquisition device, respectively, to obtain a corresponding first image and second image;
synthesizing the first image and the second image into a target image;
the step of synthesizing the first image and the second image into a target image comprises:
identifying feature information in the first image;
extracting the feature information;
adjusting the feature information;
and adding the adjusted feature information to a designated area of the second image to generate the target image;
wherein the designated area is an area within the depth-of-field range of the second image, and the step of adding the adjusted feature information to the designated area of the second image to generate the target image comprises:
adding the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image;
before the step of adding the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image, the method further comprises:
performing adjustment processing on the second image, the adjustment processing comprising reducing the depth-of-field range of the second image.
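The pipeline of claim 1 — light-dependent camera activation, then compositing the front-camera feature information into the rear image as a new layer — can be sketched as follows. The lux threshold, function names, and alpha-mask compositing are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

NIGHT_LUX = 50  # illustrative threshold; the patent gives no number

def rear_cameras_to_start(ambient_lux):
    """Claimed activation rule: enough light -> first sub-camera only;
    night-vision conditions -> both sub-cameras."""
    return ("first",) if ambient_lux >= NIGHT_LUX else ("first", "second")

def add_feature_as_layer(rear_img, feature, mask, top_left):
    """Alpha-composite `feature` (h x w x 3) into `rear_img` as a new layer.
    `mask` (h x w, values in 0..1) stands in for the extracted feature
    information's outline; `top_left` is the designated area's corner."""
    out = rear_img.astype(np.float64).copy()
    y, x = top_left
    h, w = feature.shape[:2]
    m = np.clip(mask, 0.0, 1.0)[..., None]
    out[y:y + h, x:x + w] = m * feature + (1.0 - m) * out[y:y + h, x:x + w]
    return out.astype(rear_img.dtype)
```

The "adjusting the feature information" step would happen between extraction and compositing, e.g. scaling `feature` and `mask` to fit the designated area.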
2. The method of claim 1, wherein the first image acquisition device and the second image acquisition device are located in different capture planes.
3. The method of claim 1, wherein the feature information comprises person feature information.
4. The method of claim 1, wherein the target object comprises a first target object and a second target object, and the step of capturing different target objects with the first image acquisition device and the second image acquisition device to obtain the corresponding first image and second image comprises:
capturing the first target object with the first image acquisition device to obtain the first image;
and capturing the second target object with the second image acquisition device to obtain the second image.
5. The method of claim 4, wherein the step of capturing the second target object with the second image acquisition device to obtain the second image comprises:
synchronously capturing the second target object with the first sub-camera and the second sub-camera, respectively, to obtain a corresponding third image and fourth image;
and synthesizing the third image and the fourth image into the second image.
6. The method of claim 5, wherein the third image is a color image and the fourth image is a black-and-white image.
7. The method of claim 6, wherein the first sub-camera comprises a color RGBW sensor and the second sub-camera comprises a black-and-white night-vision sensor.
8. The method of claim 1, further comprising:
outputting the target image.
9. An electronic device for generating an image, comprising at least a first image acquisition device and a second image acquisition device, wherein the first image acquisition device is a front camera and the second image acquisition device is a rear camera;
the electronic device further includes:
a start module adapted to start the first image acquisition device and the second image acquisition device, the second image acquisition device being a dual-camera device comprising a first sub-camera and a second sub-camera that are located on the same shooting plane and adjacent to each other, the first sub-camera natively supporting both phase-detection focusing and contrast focusing;
the start module is further adapted to:
start only the first sub-camera if the current shooting environment has sufficient light;
and start the first sub-camera and the second sub-camera simultaneously if the current shooting environment is a night-vision environment;
a shooting module adapted to capture different target objects with the first image acquisition device and the second image acquisition device, respectively, to obtain a corresponding first image and second image;
a synthesis module adapted to synthesize the first image and the second image into a target image;
the synthesis module is further adapted to:
identify feature information in the first image;
extract the feature information;
adjust the feature information;
and add the adjusted feature information to a designated area of the second image to generate the target image;
the synthesis module is further adapted to:
add the adjusted feature information as a new layer to an area within the depth-of-field range of the second image to generate the target image;
and an adjustment processing module adapted to perform adjustment processing on the second image, the adjustment processing comprising reducing the depth-of-field range of the second image.
10. The electronic device of claim 9, wherein the first image acquisition device and the second image acquisition device are located in different capture planes.
11. The electronic device of claim 9, wherein the feature information comprises person feature information.
12. The electronic device of claim 9, wherein the target object comprises a first target object and a second target object, and the shooting module is further adapted to:
capture the first target object with the first image acquisition device to obtain the first image;
and capture the second target object with the second image acquisition device to obtain the second image.
13. The electronic device of claim 12, wherein the shooting module is further adapted to:
synchronously capture the second target object with the first sub-camera and the second sub-camera, respectively, to obtain a corresponding third image and fourth image;
and synthesize the third image and the fourth image into the second image.
14. The electronic device of claim 13, wherein the third image is a color image and the fourth image is a black-and-white image.
15. The electronic device of claim 14, wherein the first sub-camera comprises a color RGBW sensor and the second sub-camera comprises a black-and-white night-vision sensor.
16. The electronic device of claim 9, further comprising:
an output module adapted to output the target image.
CN201510918037.7A 2015-12-10 2015-12-10 Image generation method based on electronic equipment and electronic equipment Active CN106878606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510918037.7A CN106878606B (en) 2015-12-10 2015-12-10 Image generation method based on electronic equipment and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510918037.7A CN106878606B (en) 2015-12-10 2015-12-10 Image generation method based on electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN106878606A CN106878606A (en) 2017-06-20
CN106878606B true CN106878606B (en) 2021-06-18

Family

ID=59177226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510918037.7A Active CN106878606B (en) 2015-12-10 2015-12-10 Image generation method based on electronic equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN106878606B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995420B (en) * 2017-11-30 2021-02-05 努比亚技术有限公司 Remote group photo control method, double-sided screen terminal and computer readable storage medium
CN112752030A (en) * 2019-10-30 2021-05-04 北京小米移动软件有限公司 Imaging method, imaging device, and storage medium
CN113572961A (en) * 2021-07-23 2021-10-29 维沃移动通信(杭州)有限公司 Shooting processing method and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101958976A (en) * 2010-09-29 2011-01-26 华为终端有限公司 Image processing method and wireless terminal equipment
CN103546730A (en) * 2012-07-11 2014-01-29 北京博雅华录视听技术研究院有限公司 Method for enhancing light sensitivities of images on basis of multiple cameras
CN203466898U (en) * 2013-08-16 2014-03-05 北京播思无线技术有限公司 360-degree panoramic shooting apparatus of hand-held terminal
CN103945045A (en) * 2013-01-21 2014-07-23 联想(北京)有限公司 Method and device for data processing
CN104349044A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Method and electronic equipment for shooting panoramic picture
CN104580910A (en) * 2015-01-09 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Image synthesis method and system based on front camera and rear camera
CN104994260A (en) * 2015-06-30 2015-10-21 广东欧珀移动通信有限公司 Dual-camera mobile terminal
CN105049569A (en) * 2015-08-20 2015-11-11 擎亚国际贸易(上海)有限公司 Panoramic photo composition method based on front and back cameras of cellphone

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102028952B1 (en) * 2013-02-21 2019-10-08 삼성전자주식회사 Method for synthesizing images captured by portable terminal, machine-readable storage medium and portable terminal


Also Published As

Publication number Publication date
CN106878606A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107948519B (en) Image processing method, device and equipment
CN106878605B (en) Image generation method based on electronic equipment and electronic equipment
KR102306304B1 (en) Dual camera-based imaging method and device and storage medium
KR102279436B1 (en) Image processing methods, devices and devices
KR102266649B1 (en) Image processing method and device
CN108712608B (en) Terminal equipment shooting method and device
CN108154514B (en) Image processing method, device and equipment
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN107846556B (en) Imaging method, imaging device, mobile terminal and storage medium
CN110430370B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2017509259A (en) Imaging method for portable terminal and portable terminal
JP2017505004A (en) Image generation method and dual lens apparatus
CN110266954B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2011041089A (en) Method, device and program for processing image, and imaging device
CN108156369B (en) Image processing method and device
CN112991245B (en) Dual-shot blurring processing method, device, electronic equipment and readable storage medium
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
CN106506939B (en) Image acquisition device and acquisition method
CN111246093B (en) Image processing method, image processing device, storage medium and electronic equipment
KR20110109574A (en) Image processing method and photographing apparatus using the same
US20220329729A1 (en) Photographing method, storage medium and electronic device
CN110581957B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111246092B (en) Image processing method, image processing device, storage medium and electronic equipment
CN106878606B (en) Image generation method based on electronic equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240112

Address after: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.