CN110072057B - Image processing method and related product - Google Patents

Image processing method and related product

Info

Publication number
CN110072057B
CN110072057B (application CN201910398350.0A)
Authority
CN
China
Prior art keywords
image
images
shooting
camera
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910398350.0A
Other languages
Chinese (zh)
Other versions
CN110072057A (en)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910398350.0A priority Critical patent/CN110072057B/en
Publication of CN110072057A publication Critical patent/CN110072057A/en
Application granted granted Critical
Publication of CN110072057B publication Critical patent/CN110072057B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/45 for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
                    • H04N 23/60 Control of cameras or camera modules
                        • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
                        • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
                    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
                    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
                        • H04N 23/951 by using two or more images to influence resolution, frame rate or aspect ratio
                • H04N 5/00 Details of television systems
                    • H04N 5/14 Picture signal circuitry for video frequency region
                        • H04N 5/144 Movement detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application provides an image processing method and a related product, applied to an electronic device that comprises a main camera, an auxiliary camera, and a camera slide rail. The method comprises the following steps: shooting a target object through the main camera to obtain a first image; moving the auxiliary camera along the camera slide rail and photographing continuously during the movement to obtain a plurality of second images; evaluating the image quality of the first image to obtain an image quality evaluation value; and, when the image quality evaluation value is smaller than or equal to a preset threshold value, processing the plurality of second images to obtain a third image and pushing the third image. In this way, a high-quality image can be captured quickly, and snapshot efficiency is improved.

Description

Image processing method and related product
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an image processing method and a related product.
Background
At present, with the rapid development of the camera functions of electronic devices such as mobile phones, consumer demand for more powerful cameras is rising steadily. Mobile terminals such as mobile phones are provided with two types of cameras, a front camera and a rear camera, and electronic devices with two fixed rear cameras have appeared on the market. However, the shooting angle of two fixed rear cameras is limited: in snapshot applications, when a snapshot fails, multiple shots are needed before a successful snapshot is achieved. In this way, some precious moments may be missed, so the picture the user wants cannot be obtained quickly.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related product, and the snapshot efficiency can be improved.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, where the electronic device includes a main camera, a sub-camera, and a camera slide rail, and the camera slide rail is used for the sub-camera to move, and the method includes:
shooting a target object through the main camera to obtain a first image;
the auxiliary camera moves along the camera slide rail, and continuous photographing is carried out in the moving process to obtain a plurality of second images;
performing image quality evaluation on the first image to obtain an image quality evaluation value;
and when the image quality evaluation value is smaller than or equal to a preset threshold value, performing image processing on the plurality of second images to obtain a third image, and pushing the third image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, where the electronic device includes a main camera, a sub-camera, and a camera slide rail, and the camera slide rail is used for the sub-camera to move, and the image processing apparatus includes:
the shooting unit is used for shooting a target object through the main camera to obtain a first image;
the shooting unit is also used for moving along the camera slide rail through the auxiliary camera and continuously shooting in the moving process to obtain a plurality of second images;
the evaluation unit is used for evaluating the image quality of the first image to obtain an image quality evaluation value;
and the image processing unit is used for carrying out image processing on the plurality of second images to obtain a third image and pushing the third image when the image quality evaluation value is smaller than or equal to a preset threshold value.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that the image processing method and the related product described in the embodiments of the present application are applied to an electronic device that includes a main camera, a sub-camera, and a camera slide rail, where the camera slide rail is used for moving the sub-camera. The electronic device photographs a target object through the main camera to obtain a first image, moves the sub-camera along the camera slide rail while photographing continuously to obtain a plurality of second images, and performs image quality evaluation on the first image to obtain an image quality evaluation value. When the image quality evaluation value is less than or equal to a preset threshold, it performs image processing on the plurality of second images to obtain a third image and pushes the third image. In this way, the main camera and the sub-camera can capture at the same time, and when the image from the main camera does not meet the requirements, the images photographed by the sub-camera can be used to synthesize the final captured image, so snapshot efficiency can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 1C is a schematic structural diagram of a camera of an electronic device according to an embodiment of the present disclosure;
fig. 1D is a schematic diagram of a position relationship between two adjacent images according to an embodiment of the present application;
fig. 1E is a schematic diagram of an image to be cut after two images are fused according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4A is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4B is a modified structure of the image processing apparatus shown in fig. 4A according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may include various handheld devices with wireless communication functions, vehicle-mounted devices, wireless headsets, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like. The electronic device may be, for example, a smart phone, a tablet computer, or an earphone case. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device includes a control circuit and an input-output circuit, and the input-output circuit is connected to the control circuit.
The control circuitry may include, among other things, storage and processing circuitry. The storage circuit in the storage and processing circuit may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry may be used to control the operation of the electronic device. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry may be used to run software in the electronic device, such as applications for playing an incoming-call alert ringtone, playing a short-message alert ringtone, playing an alarm alert ringtone, playing media files, and making Voice over Internet Protocol (VoIP) phone calls, as well as operating system functions, and so forth. The software may be used to perform control operations such as playing an incoming-call alert ringtone, playing a short-message alert ringtone, playing an alarm alert ringtone, playing a media file, making a voice phone call, and performing other functions in the electronic device; the embodiments of the present application are not limited in this regard.
The input-output circuit can be used to enable the electronic device to input and output data, that is, to allow the electronic device to receive data from an external device and to output data to an external device.
The input-output circuit may further include a sensor. The sensors may include ambient light sensors, optical and capacitive based infrared proximity sensors, ultrasonic sensors, touch sensors (e.g., optical based touch sensors and/or capacitive touch sensors, where the touch sensors may be part of a touch display screen or may be used independently as a touch sensor structure), acceleration sensors, gravity sensors, and other sensors, etc. The input-output circuit may further include audio components that may be used to provide audio input and output functionality for the electronic device. The audio components may also include a tone generator and other components for generating and detecting sound.
The input-output circuitry may also include one or more display screens. The display screen can comprise one or a combination of a liquid crystal display screen, an organic light emitting diode display screen, an electronic ink display screen, a plasma display screen and a display screen using other display technologies. The display screen may include an array of touch sensors (i.e., the display screen may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The input-output circuitry may further include communications circuitry that may be used to provide the electronic device with the ability to communicate with external devices. The communication circuitry may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuitry may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuitry may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit may include a near field communication antenna and a near field communication transceiver. The communications circuitry may also include cellular telephone transceiver and antennas, wireless local area network transceiver circuitry and antennas, and so forth.
The input-output circuit may further include other input-output units. Input-output units may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
The electronic device may further include a battery (not shown) for supplying power to the electronic device.
The following describes embodiments of the present application in detail.
Referring to fig. 1B, fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, and is applied to the electronic device described in fig. 1A, where the electronic device includes a main camera, a sub-camera, and a camera slide rail, and the camera slide rail is used for the sub-camera to move, and the image processing method may include the following steps:
101. Shooting a target object through the main camera to obtain a first image.
In this embodiment, the electronic device may include a main camera, an auxiliary camera, and a camera slide rail, where the camera slide rail may be used for the auxiliary camera to move. The shape and position of the camera slide rail may be set by the user or by system default. As shown in fig. 1C, the main camera, the auxiliary camera, and the camera slide rail are all located on a rear display screen of the electronic device, and both the main camera and the auxiliary camera may move along the camera slide rail in a movement mode set by the user or by system default; for example, a camera may slide or turn over in the camera slide rail, and the specific movement mode is not limited herein.
In a specific implementation, the main camera may be set by the user or by system default, and the user may shoot the target object within a designated time period and a designated area, both of which may be set by the user or by system default.
Optionally, before step 101 in which the main camera captures the target object to obtain the first image, the method may further include the following steps:
a1, acquiring current environment parameters;
a2, obtaining historical shooting environment parameters;
a3, matching the current environment parameters with historical shooting environment parameters to obtain a plurality of matching degrees;
a4, selecting the maximum value of the matching degrees, and acquiring the environment parameter corresponding to the maximum value as a target environment parameter;
a5, determining target shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
and A6, shooting the target object according to the target shooting parameters to obtain a first image.
After the camera is turned on, the electronic device can acquire the current environmental parameters through an environmental sensor, where the environmental parameters include at least one of the following: ambient light level, temperature, humidity, geographical location, magnetic field interference intensity, and the like, without limitation; the environmental sensor may be at least one of: an ambient light sensor, a temperature sensor, a humidity sensor, a position sensor, a magnetic field detection sensor, and the like, without limitation. The shooting parameters may include at least one of: shutter, aperture, sensitivity (ISO), exposure value (EV), shooting mode, and the like, which are not limited herein. In a specific implementation, after the current environmental parameters are obtained, the historical shooting environmental parameters can be obtained; these can be understood as the environmental parameters of the shooting environments in which the user has previously shot. Different shooting environments can correspond to different sets of environmental parameters: for example, a night shooting environment can correspond to one set of environmental parameters and a bright environment to another, and the historical shooting environmental parameters can be set by system default or by the user. The current environmental parameters can then be matched against the historical shooting environmental parameters, and the set of environmental parameters corresponding to the maximum matching degree is taken as the target environmental parameters.
In addition, the electronic device may pre-store a mapping relationship between preset environmental parameters and shooting parameters, and may determine the target shooting parameters through this mapping relationship; for example, corresponding shooting parameters may be preset for the environmental parameters of a night shooting environment. Finally, the main camera may be controlled to shoot the target object according to the target shooting parameters to obtain the first image. In this way, the shooting parameters can be chosen according to the environment, and an image that satisfies the user can be obtained as far as possible.
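The environment-matching flow of steps A1 to A6 can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: the cosine-similarity matching degree, the parameter names, and the example values are all assumptions.

```python
import math

def match_score(current, historical):
    # Hypothetical matching degree (steps A3/A4): cosine similarity between
    # two environment-parameter dicts that share the same keys.
    keys = sorted(current)
    dot = sum(current[k] * historical[k] for k in keys)
    norm_a = math.sqrt(sum(current[k] ** 2 for k in keys))
    norm_b = math.sqrt(sum(historical[k] ** 2 for k in keys))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_shooting_params(current_env, history, param_map):
    # Steps A3-A5: find the historical environment with the maximum matching
    # degree, then look up its shooting parameters in the preset mapping.
    best = max(history, key=lambda name: match_score(current_env, history[name]))
    return param_map[best]

# Illustrative data: a bright current environment should match the "day"
# history entry and therefore select the low-ISO parameter set.
history = {"night": {"light": 5, "temp": 15}, "day": {"light": 900, "temp": 25}}
param_map = {"night": {"iso": 1600, "ev": 2.0}, "day": {"iso": 100, "ev": 0.0}}
params = select_shooting_params({"light": 850, "temp": 24}, history, param_map)
```

Any similarity measure over the environment vectors would serve here; the point is only that the best-matching historical environment indexes the preset shooting parameters.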
102. Moving the auxiliary camera along the camera slide rail, and photographing continuously during the movement to obtain a plurality of second images.
The auxiliary camera can move along the camera slide rail according to preset movement parameters, which may be set by system default or by the user and are not limited herein. The auxiliary camera can continuously shoot the target object while moving, thereby collecting a plurality of second images. The target object may be a dynamic object or a static object, without limitation.
Optionally, in step 102, when the target object is a dynamic object, the moving along the camera slide rail by the secondary camera and continuously taking pictures during the moving process to obtain a plurality of second images may include the following steps:
b1, locking the position of the target object according to the first image;
b2, acquiring an initial shooting position according to the position of the target object;
b3, determining the relative movement speed of the target object according to the initial shooting position;
b4, determining target motion parameters of the secondary camera moving on the camera slide rail according to the relative motion speed;
and B5, moving the auxiliary camera according to the target motion parameters, and shooting the target object at preset time intervals in the moving process to obtain a plurality of second images.
The preset time interval may be set by the user or by system default. When the target object is a dynamic object, a clear picture is not easy to obtain because the object is moving. Therefore, a mapping relationship between camera motion parameters and the relative motion speed of an object may be stored in the electronic device in advance, where the motion parameters may include at least one of the following: moving direction, moving speed, moving displacement, and the like, without limitation. The target motion parameters for the camera slide rail can thus be determined according to this mapping relationship; finally, the camera is moved according to the target motion parameters, and the target object is shot at preset time intervals during the movement to obtain a plurality of second images. In a specific implementation, the focus point or focus area used when the main camera shot the first image can be obtained, the target object can be determined through that focus point or focus area, and its position can be locked, so that the target object can be positioned through the first image shot by the main camera; the position of the target object then serves as the initial shooting position of the auxiliary camera.
The electronic device may track and focus the target object according to the initial shooting position, determine the pixels of the first image, and calculate the pixel difference of the focused pixels of the target object relative to the first image. The relative motion speed v of the target object can then be determined from the pixel difference d and the main camera frame rate fps, both of which are positive numbers, for example as v = d × fps (the per-frame displacement in pixels times the number of frames per second). The target motion parameters for the sub-camera's movement on the camera rail are then determined according to the pre-stored mapping relationship between camera motion parameters and object relative motion speed. Finally, the sub-camera is controlled to move on the camera rail according to the target motion parameters and to shoot the target object at preset time intervals, obtaining a plurality of second images.
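As a hedged sketch of steps B3 and B4, the speed estimate and the speed-to-motion-parameter lookup might look like the following. The formula v = d × fps and the banded mapping are illustrative assumptions; the patent only states that v is determined from d and fps and that a pre-stored mapping supplies the motion parameters.

```python
def relative_speed(pixel_diff, fps):
    # Step B3 (assumed form): per-frame displacement d in pixels times the
    # main-camera frame rate gives pixels moved per second.
    return pixel_diff * fps

def lookup_motion_params(speed, speed_map):
    # Step B4 (assumed form): the pre-stored mapping is modeled as speed
    # bands; pick the parameters of the first band whose upper bound
    # contains the estimated relative speed.
    for upper_bound, params in sorted(speed_map.items()):
        if speed <= upper_bound:
            return params
    return params  # faster than every band: fall back to the fastest entry
```

A banded table keeps the mapping simple; a real device might interpolate between bands instead.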
103. Performing image quality evaluation on the first image to obtain an image quality evaluation value.
After obtaining the first image, the electronic device may perform image quality evaluation on it. In a specific implementation, one or more image quality evaluation indices may be used to evaluate the captured first image and obtain an image quality evaluation value, where the indices may include at least one of: average gray level, mean square error, entropy, edge preservation, signal-to-noise ratio, and the like. It may be stipulated that the larger the image quality evaluation value, the better the image quality; in this way, a high-quality image can be identified.
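A minimal sketch of the evaluation in step 103, combining two of the indices listed above (entropy and average gray level) into a single value. The 0.8/0.2 weighting and the normalization are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def quality_score(gray):
    # Information entropy of the 8-bit gray-level histogram, in bits.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))   # 0..8 bits for 8-bit images
    mean_gray = gray.mean() / 255.0     # average gray level, normalized
    # Larger value = better quality, matching the convention in the text.
    return 0.8 * (entropy / 8.0) + 0.2 * mean_gray
```

A flat black frame scores 0 (no information), while a frame using every gray level evenly scores near the top of the scale.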
Optionally, after the step 103, the following steps may be further included:
when the image quality evaluation value is larger than a preset threshold value, pushing the first image and deleting the plurality of second images.
The preset threshold value can be set by system default or by the user and is not limited herein. When the image quality evaluation value is greater than the preset threshold value, that is, when the image quality of the first image meets the standard, the electronic device can push the first image to the user and delete the plurality of second images, which saves storage space. In this way, the main camera can be used for capturing, and when the main camera captures an image that meets the requirements, that image can be used directly as the captured image.
104. When the image quality evaluation value is smaller than or equal to the preset threshold value, performing image processing on the plurality of second images to obtain a third image, and pushing the third image.
The preset threshold may be set by system default or by the user and is not limited herein. When the electronic device detects that the image quality evaluation value is less than or equal to the preset threshold, the image quality of the first image is poor or below standard; in this case, the electronic device may perform image processing on the plurality of second images to obtain a third image and push the third image to the user.
Optionally, in the step 104, when the target object is a dynamic object, the performing image processing on the plurality of second images to obtain a third image may include the following steps:
c1, performing foreground detection on the plurality of second images to obtain a plurality of foreground images, wherein each foreground image comprises a target object;
c2, acquiring a background image of the first image;
and C3, performing dynamic thumbnail generation operation according to the plurality of foreground images and the background image to obtain a third image.
If the target object is a dynamic object, a foreground image may be understood as an image region that includes the target object, and a background image as a region that does not. For example, if the second images show a bird in the sky, the region containing the bird is the foreground, while the clouds or other scenery such as an airplane form the background. In a specific implementation, foreground detection may be performed on the plurality of second images to obtain a plurality of foreground images that include the target object; these foreground images may capture the motion of the dynamic object, and the motion pose in each foreground image may differ. Then, if the first image includes the target object, the target object may be framed in the first image, the foreground and background may be modeled, and each pixel in the first image connected to a node of the foreground model or the background model; if two adjacent nodes do not belong to the same model, the connection between them is cut, thereby separating the foreground and background of the first image. If the first image does not include the target object, the first image may be used directly as the background image. Finally, a dynamic thumbnail generation operation can be performed on the plurality of foreground images and the background image to obtain the third image. The dynamic thumbnail can be understood as a dynamically playable GIF (Graphics Interchange Format) image; in this way, the generated dynamic thumbnail restores the motion pose of the dynamic object to the maximum extent and improves user experience.
The foreground detection method may include at least one of the following: a single Gaussian model, a Gaussian mixture model (Mixture of Gaussians, MOG), or self-organizing background subtraction (SOBS), etc., which is not limited herein.
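As a rough illustration of the single-Gaussian option named above, the sketch below keeps a per-pixel running mean and variance of the background and flags pixels that deviate by more than k standard deviations; the learning rate, initial variance, and threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel single-Gaussian background model (simplified sketch)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        # Foreground where the pixel deviates too far from the model.
        fg = np.abs(frame - self.mean) > self.k * np.sqrt(self.var)
        # Update the model only where the pixel looks like background.
        a = np.where(fg, 0.0, self.alpha)
        self.mean = (1 - a) * self.mean + a * frame
        self.var = (1 - a) * self.var + a * (frame - self.mean) ** 2
        return fg

bg = np.full((32, 32), 100.0)
model = SingleGaussianBackground(bg)
moving = bg.copy()
moving[10:20, 10:20] = 250.0          # a bright "bird" enters the scene
mask = model.apply(moving)
print(mask[15, 15], mask[0, 0])       # object pixel vs. background pixel
```

Running the same model over each of the second images yields one foreground mask per frame, i.e. one foreground image per motion pose.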
Optionally, in the step 104, when the target object is a static object, the performing image processing on the plurality of second images to obtain a third image may include the following steps:
d1, extracting feature points of each second image in the plurality of second images to obtain a plurality of feature point sets, wherein each second image corresponds to one feature point set;
d2, matching any two adjacent feature point sets in the plurality of feature point sets according to the shooting time sequence to obtain a plurality of matching values;
d3, selecting a matching value larger than a first preset threshold value from the multiple matching values to obtain at least one target matching value;
d4, determining a second image corresponding to the at least one target matching value to obtain a plurality of target second images;
d5, carrying out image splicing on the plurality of target second images to obtain a third image.
In a specific implementation, feature points may be extracted from the plurality of second images to obtain a plurality of feature point sets, each second image corresponding to one feature point set, so that the points with strong features in each second image are obtained. When the target object is a static object, the plurality of second images may have overlapping regions. After the feature point extraction, any two adjacent feature point sets among the plurality of feature point sets may be matched according to the shooting time sequence of the plurality of second images to obtain a plurality of matching values; the larger a matching value, the higher the relevance of the feature points in the two feature point sets and the higher the similarity of the images. A first preset threshold may be set: if a matching value exceeds the first preset threshold, a repeated region exists between the two second images corresponding to that matching value. The matching values larger than the first preset threshold may thus be selected from the plurality of matching values to obtain at least one target matching value, and the second images corresponding to the at least one target matching value may be determined to obtain a plurality of target second images, which can be regarded as images having repeated regions between adjacent images. Finally, image stitching may be performed on the plurality of target second images to obtain images of the repeated regions; the images corresponding to matching values smaller than or equal to the first preset threshold may be selected to obtain at least one image of a non-repeated region; and stitching the repeated-region and non-repeated-region images yields a third image, which may be a panorama within the shooting range of the secondary camera.
The method for extracting the feature points may include at least one of the following: Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), Features from Accelerated Segment Test (FAST), the Harris corner method, Oriented FAST and Rotated BRIEF (ORB), etc., which is not limited herein.
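Steps D2-D3 can be illustrated with a toy descriptor matcher. The "matching value" is taken here to be the fraction of descriptors in one set whose nearest neighbour in the adjacent set lies within a distance threshold — a simplification of SIFT/ORB descriptor matching, with the descriptor dimension and threshold chosen arbitrarily.

```python
import numpy as np

def matching_value(desc_a, desc_b, dist_thresh=0.5):
    """Fraction of descriptors in desc_a with a close nearest
    neighbour in desc_b (stand-in for the patent's matching value)."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < dist_thresh))

rng = np.random.default_rng(0)
set1 = rng.random((40, 32))                                 # frame t
set2 = np.vstack([set1[:20] + 0.001, rng.random((20, 32))])  # frame t+1,
# half of whose descriptors come from the shared overlap region
overlap_score = matching_value(set1, set2)
print(overlap_score >= 0.5)  # the shared half guarantees a high score
```

Comparing `overlap_score` against the first preset threshold is then exactly step D3: scores above it mark the two frames as target second images with a repeated region.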
Optionally, in the step D5, the image stitching the plurality of target second images to obtain a third image may include the following steps:
e1, selecting any one image of every two adjacent target second images in the plurality of target second images as a reference image, and selecting the other one as an image to be spliced to obtain a plurality of pairs of images, wherein each pair of images comprises the reference image and the image to be spliced;
e2, for each pair of a reference image and an image to be stitched, acquiring the feature points in their corresponding feature point sets whose matching values are larger than a second preset threshold, wherein each pair of a reference image and an image to be stitched corresponds to one set of feature point pairs;
e3, extracting the image features corresponding to the feature point pairs;
e4, determining the position relation between each pair of reference images in the plurality of pairs of images and the image to be spliced based on the characteristic point pairs to obtain a plurality of position relations;
e5, fusing the images corresponding to the image features of each pair of images to be spliced and the reference images one by one according to the position relation between each pair of reference images and the images to be spliced on the basis of the coordinate system of the reference images in each pair of images to obtain a plurality of images to be cut;
e6, cutting the multiple images to be cut according to a preset image cutting mode to obtain multiple repeated area images;
e7, selecting an image corresponding to the matching value smaller than or equal to a first preset threshold value in the matching values to obtain at least one non-repetitive area image;
and E8, splicing the at least one non-repeated area image and the at least one repeated area image according to the shooting time sequence to obtain a third image.
Because the positions at which the secondary camera shoots the target object differ, after the plurality of target second images are obtained, the form or position of the target object in the displayed image frames may also differ; therefore the plurality of target second images may be stitched and cropped again, so that images of a plurality of repeated regions can be obtained. The preset image cropping mode may be a system default or set by the user, which is not limited herein; likewise, the second preset threshold may be a system default or set by the user, which is not limited herein. In a specific implementation, for every two adjacent target second images among the plurality of target second images, either one may be defined as the reference image and the other as the image to be stitched, so that a plurality of pairs of images is obtained; each pair includes a reference image and an image to be stitched, and the specific way of choosing is not limited herein. Then, for each pair, the feature points whose matching values are greater than the second preset threshold may be extracted from the corresponding feature point sets to obtain feature point pairs, where the image region corresponding to each feature point pair is a region in which the reference image and the image to be stitched overlap.
In addition, for each pair of a reference image and an image to be stitched, the image features corresponding to the feature point pairs may be extracted. The image features may be understood as the image pixels within a certain range around the feature point pairs shared by the reference image and the image to be stitched, that is, the pixels of the repeated region of the two images. The method for extracting the image features corresponding to the feature point pairs may include at least one of the following: Histogram of Oriented Gradients (HOG) features, Local Binary Pattern (LBP) features, and the like, without limitation. Then, based on each set of feature point pairs, the positional relationship between each reference image and its image to be stitched may be determined to obtain a plurality of positional relationships. Finally, taking the coordinate system of the reference image in each pair as the basis, the image regions corresponding to the image features of each image to be stitched and of its reference image may be fused one by one according to the positional relationship between them, obtaining a plurality of images to be cropped.
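The positional relationship of step E4 can be sketched, under the simplifying assumption that the slide-rail motion is purely translational, as the mean displacement of the matched feature point pairs; a full implementation would instead fit a homography from the same pairs. All coordinates below are made up for illustration.

```python
import numpy as np

def positional_relation(ref_points, stitch_points):
    """Estimate the (dx, dy) offset of the image to be stitched
    relative to the reference image from matched point pairs."""
    offsets = np.asarray(stitch_points, float) - np.asarray(ref_points, float)
    return offsets.mean(axis=0)  # least-squares translation estimate

ref = [(120, 40), (200, 55), (160, 90)]    # corners in the reference image
stitched = [(20, 42), (100, 57), (60, 92)]  # same corners in the next shot
dx, dy = positional_relation(ref, stitched)
print(dx, dy)  # the camera advanced about 100 px between the two shots
```

The resulting offset is what places the image to be stitched into the reference image's coordinate system before the overlap regions are fused.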
Furthermore, after the secondary camera moves along the camera slide rail and takes continuous pictures, the plurality of second images are obtained on the same horizontal plane, but the shooting angles may differ. Therefore, after the images to be cropped are obtained, they may be cropped according to the preset image cropping mode to obtain a plurality of repeated-region images under the same angle. The images corresponding to matching values smaller than or equal to the first preset threshold may then be selected to obtain at least one non-repeated-region image. Finally, the at least one non-repeated-region image and the at least one repeated-region image may be spliced one by one according to the shooting time sequence to obtain a panoramic image of the target object, namely the third image; in this way, the panoramic image shot by the secondary camera for the target object is obtained.
For example, fig. 1D is a schematic diagram of the positional relationship between two adjacent images. Image 1 and image 2 are two images shot by the secondary camera; the region abc in image 1 and the region a'b'c' in image 2 are the regions in which the two images overlap. As can be seen from the diagram, the repeated region a'b'c' in image 2 differs from the region abc in image 1, so before image fusion, the repeated regions of image 2 and image 1 may be fused according to the positional relationship between the two images. Considering that a gap at the boundary between the two images may be obvious due to differences in image brightness and the like, the pixel values of the two repeated regions may be obtained and added according to certain weights to synthesize a new image, giving an image to be cropped, which may present a polygonal shape. Fig. 1E is a schematic diagram of the image to be cropped after the two images are fused: after image fusion, the repeated regions of image 2 and image 1 are merged into a polygonal image to be cropped. The image to be cropped may then be cropped according to the preset image cropping mode to obtain a new image in that mode, which is the third image.
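The weighted pixel addition described above can be sketched as a linear cross-fade across the overlap, so that the seam between image 1 and image 2 is not obvious; the ramp weighting is one common choice, assumed here since the patent only says "added according to a certain weight".

```python
import numpy as np

def fuse_overlap(overlap1, overlap2):
    """Blend two repeated-region strips with weights that ramp from
    favouring image 1 on the left to favouring image 2 on the right."""
    h, w = overlap1.shape
    w2 = np.linspace(0.0, 1.0, w)          # weight for image 2, per column
    return (1.0 - w2) * overlap1 + w2 * overlap2

a = np.full((4, 5), 100.0)   # the overlap as seen in image 1
b = np.full((4, 5), 200.0)   # the same region, brighter in image 2
fused = fuse_overlap(a, b)
print(fused[0, 0], fused[0, -1])  # left edge keeps image 1, right edge image 2
```

Because the weight varies smoothly across the seam, a brightness difference between the two shots is spread over the whole overlap instead of appearing as a visible line.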
In a possible embodiment, after the step 104, the following steps may be further included:
f1, placing the third image in a preset view frame;
f2, acquiring the final image selected by the user in the preset view frame as the target image.
The preset view-finding frame may be a system default or set by the user; the shape, size, and the like of the view-finding frame are not specifically limited. After the electronic device pushes the third image to the user, the third image may be placed in the preset view-finding frame, and the electronic device may acquire the final image that the user drags into the preset view-finding frame; this final image is the target image selected by the user.
It can be seen that, in the image processing method described in this embodiment of the application, the main camera shoots a target object to obtain a first image, and the secondary camera moves along the camera slide rail, taking pictures continuously during the movement to obtain a plurality of second images. Image quality evaluation is performed on the first image to obtain an image quality evaluation value; when the image quality evaluation value is less than or equal to a preset threshold, image processing is performed on the plurality of second images to obtain a third image, and the third image is pushed.
Consistent with the above, fig. 2 is a schematic flowchart of an image processing method provided in an embodiment of the present application. The image processing method is applied to the electronic equipment shown in FIG. 1A, the electronic equipment comprises a main camera, a secondary camera and a camera slide rail, the camera slide rail is used for the secondary camera to move, and the image processing method comprises the following steps:
201. Acquiring current environment parameters.
202. Acquiring historical shooting environment parameters.
203. Matching the current environment parameters with the historical shooting environment parameters to obtain a plurality of matching degrees.
204. Selecting the maximum value among the plurality of matching degrees, and acquiring the environment parameter corresponding to the maximum value as the target environment parameter.
205. Determining the target shooting parameters corresponding to the target environment parameter according to a preset mapping relation between environment parameters and shooting parameters.
206. Shooting the target object according to the target shooting parameters to obtain a first image.
207. Moving the secondary camera along the camera slide rail, and continuously taking pictures during the movement to obtain a plurality of second images.
208. Performing image quality evaluation on the first image to obtain an image quality evaluation value.
209. When the image quality evaluation value is smaller than or equal to the preset threshold, performing image processing on the plurality of second images to obtain a third image, and pushing the third image.
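Steps 201-205 above can be sketched with a similarity score over environment parameters. The parameter names (brightness, temperature), the scoring formula, and the mapping table are all illustrative assumptions, since the patent leaves the concrete parameters and mapping open.

```python
def match_degree(current, historical):
    """Similarity in (0, 1]: equals 1 when all parameters coincide."""
    diffs = [abs(current[k] - historical[k]) / max(abs(current[k]), 1e-9)
             for k in current]
    return 1.0 / (1.0 + sum(diffs))

current = {"brightness": 180, "temperature": 24}
history = [
    {"brightness": 60,  "temperature": 10},   # a dim historical scene
    {"brightness": 175, "temperature": 25},   # close to current conditions
]
shooting_params = [  # preset mapping: environment -> shooting parameters
    {"iso": 800, "exposure_ms": 30},
    {"iso": 100, "exposure_ms": 8},
]
degrees = [match_degree(current, h) for h in history]   # step 203
best = degrees.index(max(degrees))                      # step 204: maximum
print(shooting_params[best])                            # step 205: mapping
```

The environment entry most similar to the current one wins, and its row in the preset mapping supplies the target shooting parameters used in step 206.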
The detailed description of the steps 201 to 209 may refer to the corresponding description of the image processing method described in fig. 1B, and is not repeated herein.
It can be seen that, in the image processing method described in the embodiment of the present application, the electronic device obtains current environment parameters and historical shooting environment parameters, matches the current environment parameters with the historical shooting environment parameters to obtain a plurality of matching degrees, selects the maximum among them, and takes the environment parameter corresponding to that maximum as the target environment parameter. It then determines the target shooting parameters corresponding to the target environment parameter according to the preset mapping relation between environment parameters and shooting parameters, shoots the target object according to the target shooting parameters to obtain a first image, and moves the secondary camera along the camera slide rail, shooting continuously during the movement to obtain a plurality of second images. The image quality of the first image is evaluated to obtain an image quality evaluation value; when this value is less than or equal to the preset threshold, image processing is performed on the plurality of second images to obtain a third image, which is pushed. In this way, in order to shoot an image better adapted to the environment, the shooting parameters of the camera can be adjusted according to changes in the environment of the electronic device, and the image quality evaluation performed after shooting the target object allows a higher-quality image to be obtained, improving user experience.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, a main camera, a sub-camera, a camera slide rail, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
shooting a target object through the main camera to obtain a first image;
the auxiliary camera moves along the camera slide rail, and continuous photographing is carried out in the moving process to obtain a plurality of second images;
performing image quality evaluation on the first image to obtain an image quality evaluation value;
and when the image quality evaluation value is smaller than or equal to the preset threshold value, performing image processing on the plurality of second images to obtain a third image, and pushing the third image.
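The image quality evaluation step is not pinned down by the patent; one common stand-in, shown here purely as an assumption, is the variance of the image Laplacian, which drops sharply for blurred or detail-free shots and can therefore be compared against the preset threshold.

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian response."""
    img = img.astype(np.float64)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())

rng = np.random.default_rng(1)
sharp = rng.integers(0, 256, (64, 64)).astype(np.float64)  # detailed frame
blurred = np.full((64, 64), sharp.mean())                  # all detail lost
print(laplacian_variance(sharp) > laplacian_variance(blurred))
```

A first image whose score falls at or below the threshold would trigger the fallback to the secondary camera's second images.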
In a possible embodiment, when the target object is a dynamic object, in terms of moving the sub-camera along the camera slide rail and continuously taking pictures during the movement to obtain a plurality of second images, the program includes instructions for performing the following steps:
locking the position of the target object according to the first image;
acquiring an initial shooting position according to the position of the target object;
determining the relative movement speed of the target object according to the initial shooting position;
determining target motion parameters of the auxiliary camera moving on the camera slide rail according to the relative motion speed;
and moving the auxiliary camera according to the target motion parameters, and shooting the target object at preset time intervals in the moving process to obtain a plurality of second images.
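Under simple assumptions — the target moving parallel to the rail and shots fired at a fixed interval — the target motion parameters above might reduce to matching the slide velocity to the target's relative speed; every number below is illustrative, as the patent does not specify the kinematics.

```python
def plan_slide_motion(start_pos_m, relative_speed_mps,
                      shot_interval_s, num_shots, rail_length_m):
    """Return the slide velocity and the positions along the rail at
    which the secondary camera fires, clamped to the rail's end."""
    velocity = relative_speed_mps  # track the target's motion 1:1
    positions = [min(start_pos_m + velocity * shot_interval_s * i,
                     rail_length_m)
                 for i in range(num_shots)]
    return velocity, positions

v, shots = plan_slide_motion(start_pos_m=0.05, relative_speed_mps=0.2,
                             shot_interval_s=0.5, num_shots=4,
                             rail_length_m=0.5)
print(v, shots)  # evenly spaced shooting positions along the rail
```

Each entry of `shots` corresponds to one of the second images captured at the preset time interval while the camera tracks the target.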
In a possible embodiment, when the target object is a dynamic object, in terms of performing image processing on the plurality of second images to obtain a third image, the program further includes instructions for performing the following steps:
performing foreground detection on the plurality of second images to obtain a plurality of foreground images, wherein each foreground image comprises a target object;
acquiring a background image of the first image;
and performing dynamic thumbnail generation operation according to the plurality of foreground images and the background image to obtain a third image.
In a possible embodiment, when the target object is a static object, in the aspect of performing image processing on the plurality of second images to obtain a third image, the program further includes instructions for performing the following steps:
extracting feature points of each second image in the plurality of second images to obtain a plurality of feature point sets, wherein each second image corresponds to one feature point set;
according to the shooting time sequence, matching any two adjacent feature point sets in the plurality of feature point sets to obtain a plurality of matching values;
selecting a matching value larger than a first preset threshold value from the multiple matching values to obtain at least one target matching value;
determining a second image corresponding to the at least one target matching value to obtain a plurality of target second images;
and carrying out image splicing on the plurality of target second images to obtain a third image.
In a possible embodiment, before the shooting of the target object by the main camera to obtain the first image, the program includes instructions for performing the following steps:
acquiring current environmental parameters;
acquiring historical shooting environment parameters;
matching the current environment parameters with historical shooting environment parameters to obtain a plurality of matching degrees;
selecting the maximum value of the matching degrees, and acquiring the environmental parameter corresponding to the maximum value as a target environmental parameter;
determining target shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
and shooting the target object according to the target shooting parameters to obtain a first image.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions in order to realize the above-mentioned functions. Those of skill in the art will readily appreciate that the present application is capable of hardware or a combination of hardware and computer software implementing the various illustrative elements and algorithm steps described in connection with the embodiments provided herein. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4A, fig. 4A is a schematic structural diagram of an image processing apparatus disclosed in an embodiment of the present application, and is applied to the electronic device shown in fig. 1A, where the electronic device includes a main camera, a sub-camera, and a camera slide rail, the camera slide rail is used for the sub-camera to move, the image processing apparatus includes a shooting unit 401, an evaluation unit 402, and an image processing unit 403, where,
the shooting unit 401 is configured to shoot a target object through the main camera to obtain a first image;
the shooting unit 401 is further configured to move along the camera slide rail through the secondary camera, and continuously shoot in the moving process to obtain a plurality of second images;
the evaluation unit 402 is configured to perform image quality evaluation on the first image to obtain an image quality evaluation value;
the image processing unit 403 is configured to, when the image quality evaluation value is less than or equal to the preset threshold, perform image processing on the plurality of second images to obtain a third image, and push the third image.
Optionally, when the target object is a dynamic object, in terms of moving the secondary camera along the camera slide rail and continuously photographing during the movement to obtain a plurality of second images, the shooting unit 401 is specifically configured to:
locking the position of the target object according to the first image;
acquiring an initial shooting position according to the position of the target object;
determining the relative movement speed of the target object according to the initial shooting position;
determining target motion parameters of the auxiliary camera moving on the camera slide rail according to the relative motion speed;
and moving the auxiliary camera according to the target motion parameters, and shooting the target object at preset time intervals in the moving process to obtain a plurality of second images.
It can be seen that the image processing apparatus described in the embodiment of the present application shoots a target object through the main camera to obtain a first image, moves the secondary camera along the camera slide rail while taking pictures continuously to obtain a plurality of second images, performs image quality evaluation on the first image to obtain an image quality evaluation value, and, when the image quality evaluation value is less than or equal to a preset threshold, processes the plurality of second images to obtain a third image and pushes it. In this way the main camera and the secondary camera can capture at the same time: when the main camera's shot meets the requirements, the image shot by the main camera is used directly as the snapshot image; when it does not, the images shot by the secondary camera can be used to synthesize the final snapshot image, which can improve snapshot efficiency.
Referring to fig. 4B, fig. 4B is a modified structure of the image processing apparatus shown in fig. 4A provided in the embodiment of the present application, and is applied to the electronic device shown in fig. 1A, where the electronic device includes a main camera, a sub-camera, and a camera slide rail, the camera slide rail is used for the sub-camera to move, the image processing apparatus further includes an obtaining unit 404, a matching unit 405, a selecting unit 406, and a determining unit 407,
an obtaining unit 404, configured to obtain a current environment parameter;
the obtaining unit 404 is further configured to obtain historical shooting environment parameters;
a matching unit 405, configured to match the current environment parameter with a historical shooting environment parameter to obtain multiple matching degrees;
a selecting unit 406, configured to select a maximum value of the multiple matching degrees, and obtain an environmental parameter corresponding to the maximum value as a target environmental parameter;
a determining unit 407, configured to determine a target shooting parameter corresponding to the target environment parameter according to a mapping relationship between preset environment parameters and shooting parameters;
the shooting unit 401 is further configured to shoot the target object according to the target shooting parameter, so as to obtain a first image.
It should be noted that the electronic device described in the embodiments of the present application is presented in the form of functional units. The term "unit" as used herein is to be understood in its broadest possible sense; the objects used to implement the functions described by each "unit" may be, for example, an application-specific integrated circuit (ASIC), a single circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Among them, the photographing unit 401, the evaluation unit 402, the image processing unit 403, the acquisition unit 404, the matching unit 405, the selection unit 406, and the determination unit 407 may be a control circuit or a processor.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute a part or all of the steps of any one of the image processing methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the image processing methods as set forth in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, applied to an electronic device, wherein the electronic device comprises a main camera, an auxiliary camera, and a camera slide rail along which the auxiliary camera moves, the method comprising the following steps:
shooting a target object through the main camera to obtain a first image;
moving the auxiliary camera along the camera slide rail and photographing continuously during the movement to obtain a plurality of second images;
performing image quality evaluation on the first image to obtain an image quality evaluation value;
and when the image quality evaluation value is smaller than or equal to a preset threshold value, performing image processing on the plurality of second images to obtain a third image, and pushing the third image.
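The quality gate in claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the variance-of-Laplacian sharpness score, the threshold value, and all function names are assumptions made for the example, and the burst processing is stubbed out with a simple average.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 3x3 Laplacian response: a simple no-reference sharpness score."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def choose_output(first_image, second_images, threshold=100.0):
    """Return the first image if its quality score clears the preset threshold;
    otherwise process the auxiliary-camera burst into a third image."""
    if laplacian_variance(first_image) > threshold:
        return first_image
    # Stand-in for the claim-3 / claim-4 processing of the second images:
    return np.mean(np.stack(second_images), axis=0)
```

In practice the evaluation value could combine several metrics (exposure, noise, sharpness); the single-score gate above only illustrates the branch structure of the claim.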
2. The method according to claim 1, wherein when the target object is a dynamic object, the moving of the auxiliary camera along the camera slide rail and the continuous photographing during the movement to obtain the plurality of second images comprise:
locking the position of the target object according to the first image;
acquiring an initial shooting position according to the position of the target object;
determining the relative movement speed of the target object according to the initial shooting position;
determining target motion parameters of the auxiliary camera moving on the camera slide rail according to the relative motion speed;
and moving the auxiliary camera according to the target motion parameters, and shooting the target object at preset time intervals in the moving process to obtain a plurality of second images.
3. The method according to claim 1 or 2, wherein when the target object is a dynamic object, the performing image processing on the plurality of second images to obtain a third image includes:
performing foreground detection on the plurality of second images to obtain a plurality of foreground images, wherein each foreground image comprises a target object;
acquiring a background image of the first image;
and performing dynamic thumbnail generation operation according to the plurality of foreground images and the background image to obtain a third image.
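The foreground/background composition of claim 3 can be sketched as below, assuming the foreground can be separated by simple background differencing (a real pipeline would use a motion or segmentation model). Each burst frame's foreground is pasted onto the first image's background, and the resulting frame sequence forms the dynamic-thumbnail third image. The difference threshold is an assumption.

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """True where the frame differs from the background by more than `threshold`."""
    return np.abs(frame.astype(np.int32) - background.astype(np.int32)) > threshold

def dynamic_thumbnail_frames(second_images, background):
    """Compose each burst frame's foreground onto the shared background."""
    frames = []
    for frame in second_images:
        mask = foreground_mask(frame, background)
        composite = background.copy()
        composite[mask] = frame[mask]  # keep the background, overlay the moving object
        frames.append(composite)
    return frames  # e.g. encode these frames as an animated image
```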
4. The method according to claim 1 or 2, wherein when the target object is a static object, the performing image processing on the plurality of second images to obtain a third image comprises:
extracting feature points of each second image in the plurality of second images to obtain a plurality of feature point sets, wherein each second image corresponds to one feature point set;
according to the shooting time sequence, matching any two adjacent feature point sets in the plurality of feature point sets to obtain a plurality of matching values;
selecting a matching value larger than a first preset threshold value from the multiple matching values to obtain at least one target matching value;
determining a second image corresponding to the at least one target matching value to obtain a plurality of target second images;
and carrying out image splicing on the plurality of target second images to obtain a third image.
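Claim 4's selection step can be sketched as follows, under the simplifying assumption that each frame's feature points are reduced to a set of hashable descriptors (a real implementation would match e.g. ORB or SIFT keypoints by descriptor distance). Adjacent frames in shooting order are scored by set overlap, and only frames whose score clears the first preset threshold are kept as target second images for stitching; the Jaccard score and the threshold value are illustrative.

```python
def match_value(features_a, features_b):
    """Jaccard overlap of two feature sets, in [0, 1]."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def select_stitchable(feature_sets, first_threshold=0.3):
    """Indices of frames retained for stitching, in shooting order."""
    keep = [0]  # anchor on the first frame
    for i in range(1, len(feature_sets)):
        if match_value(feature_sets[i - 1], feature_sets[i]) > first_threshold:
            keep.append(i)
    return keep
```

Frames with too little overlap with their neighbor cannot be registered reliably, so dropping them before stitching avoids visible seams in the third image.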
5. The method of claim 1, wherein before the capturing of the target object by the main camera to obtain the first image, the method further comprises:
acquiring current environmental parameters;
acquiring historical shooting environment parameters;
matching the current environment parameters with historical shooting environment parameters to obtain a plurality of matching degrees;
selecting the maximum value of the matching degrees, and acquiring the environmental parameter corresponding to the maximum value as a target environmental parameter;
determining target shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
and shooting the target object according to the target shooting parameters to obtain a first image.
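Claim 5's environment matching can be sketched as below. The environment field names (brightness, distance) and the inverse-distance similarity measure are assumptions made for the example; the patent only requires some matching degree between the current and historical environment parameters.

```python
def similarity(env_a, env_b):
    """Inverse Euclidean-distance similarity over shared numeric fields."""
    d = sum((env_a[k] - env_b[k]) ** 2 for k in env_a) ** 0.5
    return 1.0 / (1.0 + d)

def pick_shooting_parameters(current_env, history, parameter_map):
    """history: list of past environments; parameter_map: history index -> the
    shooting parameters mapped to that environment."""
    scores = [similarity(current_env, past) for past in history]
    best = max(range(len(scores)), key=scores.__getitem__)
    return parameter_map[best]
```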
6. An image processing apparatus, characterized in that the apparatus comprises:
the shooting unit is used for shooting a target object through the main camera to obtain a first image;
the shooting unit is further used for moving the auxiliary camera along the camera slide rail and photographing continuously during the movement to obtain a plurality of second images;
the evaluation unit is used for evaluating the image quality of the first image to obtain an image quality evaluation value;
and the image processing unit is used for carrying out image processing on the plurality of second images to obtain a third image and pushing the third image when the image quality evaluation value is smaller than or equal to a preset threshold value.
7. The apparatus according to claim 6, wherein when the target object is a dynamic object, in moving the auxiliary camera along the camera slide rail and photographing continuously during the movement to obtain the plurality of second images, the shooting unit is specifically configured to:
locking the position of the target object according to the first image;
acquiring an initial shooting position according to the position of the target object;
determining the relative movement speed of the target object according to the initial shooting position;
determining target motion parameters of the auxiliary camera moving on the camera slide rail according to the relative motion speed;
and moving the auxiliary camera according to the target motion parameters, and shooting the target object at preset time intervals in the moving process to obtain a plurality of second images.
8. The apparatus of claim 6 or 7, further comprising:
the acquisition unit is used for acquiring current environment parameters;
the acquisition unit is also used for acquiring historical shooting environment parameters;
the matching unit is used for matching the current environment parameters with the historical shooting environment parameters to obtain a plurality of matching degrees;
the selecting unit is used for selecting the maximum value in the matching degrees and acquiring the environmental parameter corresponding to the maximum value as a target environmental parameter;
the determining unit is used for determining target shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
the shooting unit is further used for shooting the target object according to the target shooting parameters to obtain a first image.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored thereon, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN201910398350.0A 2019-05-14 2019-05-14 Image processing method and related product Active CN110072057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910398350.0A CN110072057B (en) 2019-05-14 2019-05-14 Image processing method and related product

Publications (2)

Publication Number Publication Date
CN110072057A CN110072057A (en) 2019-07-30
CN110072057B true CN110072057B (en) 2021-03-09

Family

ID=67370723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910398350.0A Active CN110072057B (en) 2019-05-14 2019-05-14 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN110072057B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112749722B (en) * 2019-10-31 2023-11-17 深圳云天励飞技术有限公司 Model distribution management method and related products thereof
CN110913128B (en) * 2019-11-11 2021-04-23 苏州浩哥文化传播有限公司 Multi-azimuth intelligent control method for stage camera device
CN114095641A (en) * 2020-07-21 2022-02-25 珠海格力电器股份有限公司 Image display method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104320508A * 2014-10-27 2015-01-28 广东欧珀移动通信有限公司 Mobile terminal with rotatable camera and camera angle control method thereof
CN108449543A * 2018-03-14 2018-08-24 广东欧珀移动通信有限公司 Image synthesis method and device, computer storage medium and electronic equipment
CN108965710A * 2018-07-26 2018-12-07 努比亚技术有限公司 Photographing method and device, and computer-readable storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6302679B2 (en) * 2014-01-20 2018-03-28 株式会社 日立産業制御ソリューションズ IMAGING DEVICE, IMAGING METHOD, FOCUS CONTROL DEVICE
CN106559664A * 2015-09-30 2017-04-05 成都理想境界科技有限公司 Shooting apparatus and device for three-dimensional panoramic images
CN106921829A * 2015-12-25 2017-07-04 北京奇虎科技有限公司 Photographing method and device, and photographing apparatus
CN106527943A (en) * 2016-11-04 2017-03-22 上海传英信息技术有限公司 Camera switching method and mobile terminal
CN106506958B * 2016-11-15 2020-04-10 维沃移动通信有限公司 Shooting method using a mobile terminal, and mobile terminal
CN107613212A * 2017-10-23 2018-01-19 青岛海信移动通信技术股份有限公司 Mobile terminal and image capturing method thereof
CN108419019A * 2018-05-08 2018-08-17 Oppo广东移动通信有限公司 Photographing reminder method, device, storage medium and mobile terminal
CN108769478B (en) * 2018-06-08 2021-01-15 Oppo广东移动通信有限公司 Control method of sliding assembly, control assembly and electronic equipment
CN208797995U * 2018-08-31 2019-04-26 信利光电股份有限公司 Camera assembly suitable for a full-screen mobile phone
CN109257528A * 2018-10-11 2019-01-22 信利光电股份有限公司 Multi-camera device with adjustable FOV, and camera terminal
CN109218593A * 2018-11-20 2019-01-15 信宜市华联高科电子科技有限公司 Mobile phone camera for automatically capturing panoramic images

Also Published As

Publication number Publication date
CN110072057A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110087123B (en) Video file production method, device, equipment and readable storage medium
EP3531689B1 (en) Optical imaging method and apparatus
CN109816663B (en) Image processing method, device and equipment
CN110139033B (en) Photographing control method and related product
CN110059685B (en) Character area detection method, device and storage medium
KR101725533B1 (en) Method and terminal for acquiring panoramic image
CN110113515B (en) Photographing control method and related product
CN109002787B (en) Image processing method and device, storage medium and electronic equipment
CN110072057B (en) Image processing method and related product
CN104333701A (en) Method and device for displaying camera preview pictures as well as terminal
CN104361558B (en) Image processing method, device and equipment
CN108776822B (en) Target area detection method, device, terminal and storage medium
CN111836052B (en) Image compression method, image compression device, electronic equipment and storage medium
CN110876036B (en) Video generation method and related device
CN112116624A (en) Image processing method and electronic equipment
CN110807769B (en) Image display control method and device
CN108616687A Photographing method, device and mobile terminal
CN106713656B (en) Shooting method and mobile terminal
CN109561255B (en) Terminal photographing method and device and storage medium
CN110266942B (en) Picture synthesis method and related product
CN112990197A (en) License plate recognition method and device, electronic equipment and storage medium
CN110545385A Image processing method and terminal device
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
CN112468722B (en) Shooting method, device, equipment and storage medium
CN110233966B (en) Image generation method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant