WO2022252158A1 - Photographing method, mobile terminal and readable storage medium - Google Patents

Photographing method, mobile terminal and readable storage medium

Info

Publication number
WO2022252158A1
WO2022252158A1 · PCT/CN2021/097994 · CN2021097994W
Authority
WO
WIPO (PCT)
Prior art keywords
image data
target
cameras
target object
target image
Prior art date
Application number
PCT/CN2021/097994
Other languages
English (en)
French (fr)
Inventor
彭叶斌
蓝建梁
高敬轩
Original Assignee
深圳传音控股股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳传音控股股份有限公司 filed Critical 深圳传音控股股份有限公司
Priority to PCT/CN2021/097994 priority Critical patent/WO2022252158A1/zh
Priority to CN202180096946.0A priority patent/CN117157989A/zh
Publication of WO2022252158A1 publication Critical patent/WO2022252158A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present application relates to the technical field of communication, and in particular to a photographing method, a mobile terminal and a readable storage medium.
  • In conceiving and implementing the present application, the inventor found at least the following problem: when one of the cameras is used to output images, influencing factors of the camera during image capture may make the instantaneously captured image unclear, so that the shot often has to be repeated many times, which degrades the photographing effect.
  • The present application provides a photographing method, a mobile terminal and a readable storage medium to improve the photographing effect.
  • this application provides a method for taking pictures, which is applied to a mobile terminal, including:
  • the mobile terminal to which the method is applied includes a folding screen, the folding screen has at least two foldable screens, and at least two of the cameras are respectively arranged on the two screens.
  • The feature information of the image data includes at least one of the number of target objects, the position of the target object, the sharpness of the target object, the brightness of the target object, and the image resolution.
  • The manner of comparing the feature information of the image data collected by all the cameras to determine the target image data whose feature information is better than the feature information of the image data collected by other cameras includes at least one of the following:
  • the image resolutions of the image data of all the cameras are compared, and the image data with the highest image resolution is determined as the target image data.
  • the step of comparing the feature information of the image data collected by all the cameras to determine the target image data whose feature information is better than the feature information of the image data collected by other cameras includes:
  • the feature information includes the number of target objects
  • The feature information may also include display features; optionally, the display features include at least one of the display position of the target object, the display sharpness of the target object, the display brightness of the target object, and the image resolution.
  • the step of comparing the characteristic information of the image data collected by all the cameras to determine the target image data whose characteristic information is better than the characteristic information of the image data collected by other cameras may further include:
  • the target image data is updated by using image data synthesized from the other image data and the target image data.
  • the step of updating the target image data by using the image data synthesized by the other image data and the target image data includes:
  • the spliced image data is used as the target image data.
  • the step of stitching the image data of the non-overlapping area to the stitching position corresponding to the target image data includes:
  • The image feature includes texture and/or color, and the similarity is determined based on the texture difference and/or the color difference.
  • the step of displaying the target image data on the preview interface of the mobile terminal may further include:
  • the adjustment prompt information includes prompt information for rotating the folding angle of the folding screen of the mobile terminal, and/or prompt information for adjusting the position of the mobile terminal.
  • the present application also provides a photographing method, the photographing method comprising:
  • the step of generating the target image data according to the image data collected by all the cameras includes:
  • Target image data is generated based on the adjusted image data.
  • the step of generating target image data based on the adjusted image data includes:
  • the target image data is generated based on the processed image data.
  • the present application also provides a mobile terminal, including: a memory and a processor, wherein a photographing program is stored in the memory, and when the photographing program is executed by the processor, the steps of any one of the above methods are implemented.
  • the present application also provides a computer storage medium, where the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any one of the above methods are implemented.
  • The photographing method of the present application is applied to a mobile terminal. When a picture is taken, at least two of the cameras are controlled to collect image data; then, by comparing the feature information of the image data collected by the cameras, target image data whose feature information is better than the feature information of the image data collected by the other cameras is determined; and the target image data is displayed on the preview interface of the mobile terminal.
  • Compared with using only one camera to collect images, this application can avoid the situation in which an unclear image collected by one camera leads to a poor shooting result, improving the photographing effect of the mobile terminal.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application
  • FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of a photographing method according to the first embodiment
  • Fig. 4 is a schematic structural diagram of an embodiment of a mobile terminal shown according to the first embodiment
  • Fig. 5 is a schematic structural diagram of another embodiment of a mobile terminal according to the first embodiment
  • Fig. 6 is a schematic structural diagram of another embodiment of the mobile terminal shown according to the first embodiment
  • FIG. 7 is a schematic diagram of an embodiment of the refined flow of step S20 of the photographing method according to the second embodiment;
  • FIG. 8 is a schematic diagram of an embodiment of the refined flow of step S26 of the photographing method according to the third embodiment;
  • Fig. 9 is a schematic flowchart of a photographing method according to a fourth embodiment.
  • Fig. 10 is a schematic flowchart of a photographing method according to a fifth embodiment.
  • first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
  • The singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context indicates otherwise.
  • “A, B or C” or “A, B and/or C” means “any of the following: A; B; C; A and B; A and C; B and C; A and B and C”. Exceptions to this definition will only arise when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
  • The word “if” as used herein may be interpreted as “at”, “when”, “in response to determining” or “in response to detecting”; similarly, the phrases “if determined” or “if detected (the stated condition or event)” may be interpreted as “when determined”, “in response to the determination”, “when detected (the stated condition or event)” or “in response to detection of (the stated condition or event)”.
  • Step codes such as S10 and S20 are used herein to express the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; in specific implementation, S20 may be executed before S10, etc., but all of these should fall within the protection scope of this application.
  • Mobile terminals may be implemented in various forms.
  • The mobile terminals described in this application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
  • a mobile terminal will be taken as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present application.
  • The mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111.
  • the radio frequency unit 101 can be used for sending and receiving information or receiving and sending signals during a call.
  • Downlink information received by the radio frequency unit 101 can be handed to the processor 110 for processing; in addition, uplink data can be sent to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • The above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), etc.
  • WiFi is a short-distance wireless transmission technology.
  • the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102, which provides users with wireless broadband Internet access.
  • Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as required without changing the essence of the invention.
  • The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (eg, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 may include a speaker, a buzzer, and the like.
  • the A/V input unit 104 is used to receive audio or video signals.
  • The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or video.
  • the processed image frames may be displayed on the display unit 106 .
  • the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102 .
  • The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data.
  • the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output in case of a phone call mode.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the mobile terminal 100 may also include at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • The ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear.
  • As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); other sensors that may also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers and infrared sensors, will not be described in detail here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
  • the user input unit 107 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
  • The touch panel 1071, also referred to as a touch screen, can collect touch operations of the user on or near it (for example, operations of the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends them to the processor 110; it can also receive commands sent by the processor 110 and execute them.
  • the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072 .
  • Other input devices 1072 may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control buttons and switch buttons), trackballs, mice, joysticks, etc., which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061.
  • When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • Although in FIG. 1 the touch panel 1071 and the display panel 1061 are shown as two independent components realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal.
  • the implementation of the input and output functions of the mobile terminal is not specifically limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100 .
  • an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • The interface unit 108 can be used to receive input from an external device (for example, data information, power, etc.) and transmit the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal 100 and an external device.
  • the memory 109 can be used to store software programs as well as various data.
  • the memory 109 can mainly include a storage program area and a storage data area.
  • The storage program area can store an operating system, at least one application program required by a function (such as a sound playback function or an image playback function), etc.; the storage data area can store data created according to the use of the mobile phone (such as audio data and a phone book), etc.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage devices.
  • the processor 110 is the control center of the mobile terminal, and uses various interfaces and lines to connect various parts of the entire mobile terminal, by running or executing software programs and/or modules stored in the memory 109, and calling data stored in the memory 109 , execute various functions of the mobile terminal and process data, so as to monitor the mobile terminal as a whole.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, and optionally, the application processor mainly processes operating systems, user interfaces, and application programs, etc.
  • the modem processor primarily handles wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 110 .
  • the mobile terminal 100 may also include a power supply 111 (such as a battery) for supplying power to various components.
  • The power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
  • the following describes the communication network system on which the mobile terminal of the present application is based.
  • Fig. 2 is a communication network system architecture diagram provided by an embodiment of the present application. This communication network system is an LTE system of universal mobile telecommunication technology; the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204, which are communicatively connected in sequence.
  • the UE 201 may be the above-mentioned terminal 100, which will not be repeated here.
  • The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022, and so on.
  • The eNodeB 2021 can be connected to other eNodeBs 2022 through a backhaul (for example, an X2 interface); the eNodeB 2021 is connected to the EPC 203 and can provide access from the UE 201 to the EPC 203.
  • The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, etc.
  • The MME 2031 is a control node that processes signaling between the UE 201 and the EPC 203 and provides bearer and connection management.
  • The HSS 2032 is used to provide registers for managing functions such as the home location register (not shown in the figure) and to save user-specific information about service features and data rates.
  • The PCRF 2036 is the policy and charging control policy decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
  • The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
  • Although the LTE system is used as an example above, those skilled in the art should know that this application is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems; no limitation is imposed here.
  • During photographing, one of the cameras is generally used to output images; in that case, influencing factors of the camera (such as shaking) may result in an unclear instantaneously captured image.
  • This application proposes the first embodiment of the photographing method, including the following steps:
  • Step S10, controlling at least two of the cameras to collect image data;
  • Step S20, comparing the feature information of the image data collected by the cameras to determine target image data whose feature information is better than the feature information of the image data collected by other cameras;
  • Step S30, displaying the target image data on a preview interface of the mobile terminal.
  • This embodiment is applied to a mobile terminal.
  • At least two cameras are provided on the mobile terminal.
  • In some embodiments, at least two of the cameras are located at different positions on the same screen of the mobile terminal. As shown in FIG. 4, the cameras 11 are distributed in the horizontal direction of the display screen 1; the positions from which the cameras 11 collect images differ, or the angles of view of the collected images differ, or the ranges of the collected images differ.
  • In other embodiments, as shown in FIG. 5, the display screen 1 of the mobile terminal includes a main screen, a secondary screen and a side screen, and the cameras 11 are arranged on the main screen and the side screen, so that at least two cameras 11 can collect images from different angles.
  • In other embodiments, as shown in FIG. 6, the mobile terminal includes a folding screen 1, the folding screen 1 has at least two foldable screens, and at least two cameras 11 are respectively arranged on the two screens, so that the photographing function of this embodiment can likewise be realized.
  • In the following description, the case in which the mobile terminal includes a folding screen with the cameras respectively arranged on its different foldable screens is taken as an example.
  • When the mobile terminal receives a photographing instruction, it controls at least two cameras to collect image data. Because the cameras are located at different positions, the image data they collect differ; based on this, the feature information of the image data collected by the cameras is compared to determine target image data whose feature information is better than the feature information of the image data collected by the other cameras, and the target image data is then displayed on the preview interface of the mobile terminal.
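As an illustration only, the following is a minimal sketch of the S10-S30 flow in Python with OpenCV. The camera indices, the use of local `VideoCapture` devices, and the choice of sharpness as the compared feature information are all assumptions for the sketch, not details prescribed by the patent.

```python
# Minimal sketch of steps S10-S30, assuming two OpenCV-accessible cameras.
import cv2

def capture_from_cameras(indices):
    """S10: collect one frame of image data from each camera."""
    frames = []
    for idx in indices:
        cap = cv2.VideoCapture(idx)   # hypothetical local camera index
        ok, frame = cap.read()
        cap.release()
        if ok:
            frames.append(frame)
    return frames

def sharpness(frame):
    """Example feature information: sharpness as variance of the Laplacian."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def take_photo(indices=(0, 1)):
    frames = capture_from_cameras(indices)   # S10
    target = max(frames, key=sharpness)      # S20: best feature information wins
    cv2.imshow("preview", target)            # S30: display on preview interface
    cv2.waitKey(0)
```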
  • Images are collected by at least two cameras (optionally, they may be collected simultaneously or not simultaneously, for example collected separately within a preset short period of time; the command that triggers the collection may be a single command or multiple commands), which increases the probability of collecting image data with a better image effect, thereby reducing the influence that some influencing factor acting on a certain camera has on the result of this shot.
  • The user can quickly obtain the target image data and thus quickly capture the image: the instantaneous image can be collected quickly, the data collected by at least two cameras are compared to obtain the image data with the best effect for generating the image, and the user does not need to shoot repeatedly; the shooting effect is good, improving the photographing experience.
  • The image data collected by at least two of the cameras are image data of different field-of-view ranges, or image data of different viewing angles; therefore, the image data collected by the cameras are at least partially different. In some embodiments, the image data collected by the cameras can be spliced to form an image with a wider field of view, for example in a group photo, which saves the need to adjust the shooting position back and forth.
  • At least two cameras are used to collect image data, and the display effect of the picture can be improved by selecting target image data with better display effect as the original data of the picture.
  • Optionally, target image data with a better display effect is selected based on the feature information of the image data collected by each of the cameras. Optionally, the feature information of the image data includes at least one of the number of target objects, the position of the target object, the sharpness of the target object, the brightness of the target object, and the image resolution.
  • The target image data can be selected in a variety of ways; the manners of comparing the feature information of the image data collected by all the cameras to determine the target image data whose feature information is better than the feature information of the image data collected by other cameras include at least one of the following:
  • In one manner, the feature information of the image data is the number of target objects. Taking the target object being a person as an example, the target image data is determined according to the number of people. Because each camera is set at a different location, the number of people it can capture differs; the image data collected by the camera that captures the largest number of target objects is used as the target image data and then displayed on the preview interface. In this way, the multiple cameras increase the probability of capturing all target objects, and the purpose of taking a group photo can be achieved without adjusting positions, improving the photographing effect.
  • In another manner, the feature information of the image data is the position of the target object, such as the display position of the target object relative to the preview interface. Taking the target object being a person as an example, it is identified whether the position of the target object is at a preset position of the preview interface (the middle position, or another good display position such as a position two grids below the middle), and the image data whose target object is at the preset position of the preview interface is used as the target image data and then displayed on the preview interface, so that the image formed from the preview data on the preview interface displays the person better, improving the photographing effect.
  • In another manner, the feature information of the image data is the sharpness of the target object. Taking the target object being a person as an example, by comparing the sharpness of the target object in the image data collected by each of the cameras, the clearest image data is used as the target image data, so that a clear display effect is achieved when the target image data is displayed on the preview interface, improving the photographing effect.
  • In another manner, the feature information of the image data is the brightness of the target object. Taking the target object being a person as an example, by comparing the brightness of the target object in the image data collected by each of the cameras, the brightest image data is used as the target image data; in this way, the display of the target object is brighter when the target image data is displayed on the preview interface, improving the photographing effect.
  • In another manner, the feature information of the image data is the image resolution of the image data; by comparing the resolutions of the image data collected by each of the cameras, the image data with the highest resolution is used as the target image data. When the target image data is displayed on the preview interface, the displayed image data has a high resolution, which improves the display effect of the image and thus achieves a better photographing effect.
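To illustrate these comparison manners, the sketch below (assuming OpenCV and NumPy) picks a frame index per criterion; mean luma, pixel count, and Laplacian variance are common stand-ins for brightness, image resolution, and sharpness, chosen here for illustration rather than specified by the patent.

```python
# Per-criterion comparison helpers; each returns the index of the best frame.
import cv2
import numpy as np

def brightest(frames):
    """Brightness compared via mean luma of each frame."""
    return int(np.argmax([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).mean()
                          for f in frames]))

def highest_resolution(frames):
    """Image resolution compared via total pixel count."""
    return int(np.argmax([f.shape[0] * f.shape[1] for f in frames]))

def sharpest(frames):
    """Sharpness compared via variance of the Laplacian."""
    return int(np.argmax([cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY),
                                        cv2.CV_64F).var() for f in frames]))
```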
  • The feature information used in the step of comparing the feature information of the image data collected by the cameras may be feature information set by the user, or may be determined based on the shooting type. For example, before step S20, the shooting type is obtained and the feature information is determined according to the shooting type; then the feature information of the image data collected by each of the cameras is compared to determine the target image data whose feature information is better than the feature information of the image data collected by the other cameras. That is to say, the feature information may differ based on different user settings or different shooting types; in this way, different manners of acquiring the target image data are used for different feature information.
  • If the shooting type is a person, the number of target objects and/or the position of the target object is given a higher weight, the number or position of the target objects in the image data of each of the cameras is compared preferentially, and the image data with the largest number of target objects, and/or the image data whose target object is at the preset position of the preview interface, is determined as the target image data. Since the purpose of shooting people is mainly to display the people, determining the target image data based on the number and position of the target objects yields image data that better meets the needs of users, improving the photographing effect.
  • If the shooting type is a scene, feature information such as the sharpness of the target object or the image resolution is given a higher weight, and the sharpness of the target object or the image resolution is compared preferentially across the image data of each of the cameras; the image data in which the target object is clearest is determined as the target image data, and/or the image data with the highest image resolution is used as the target image data. Since the prominence of a scene differs from that of a person, obtaining an image with higher sharpness or higher image resolution improves the display effect of the scene and hence the photographing effect.
  • the feature information corresponding to different shooting types is different, or the feature information corresponding to different settings of the user is different.
  • In this way, image data that better matches the needs can be obtained as the target image data under different needs.
  • The target image data may also be determined by combining at least two of the above: the number of target objects, the position of the target object, the sharpness of the target object, the brightness of the target object, and the image resolution.
  • Optionally, weights are correspondingly set for the number of target objects, the position of the target object, the sharpness of the target object, the brightness of the target object, and the image resolution, and the target image data is determined preferentially based on the weights: if the weight corresponding to the number of target objects is the largest, the target image data is determined based on the number of target objects; if the weight of the position of the target object is greater than that of the number of target objects, the target image data is determined based on the position of the target object.
  • Each weight corresponding to the feature information may be set by the user, may be determined or generated based on the user's usage habits or big data analysis, or may differ in different shooting scenarios.
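A hedged sketch of such weight-based selection follows: each frame gets a per-feature score, the scores are min-max normalized so the weights are comparable, and the weighted sum decides. The normalization scheme and any concrete weight values are illustrative assumptions.

```python
# Weight-based selection over per-feature scoring functions.
import numpy as np

def pick_target(frames, scorers, weights):
    """scorers: {name: fn(frame) -> float}; weights: {name: float}."""
    names = list(scorers)
    raw = np.array([[scorers[n](f) for n in names] for f in frames], float)
    span = raw.max(axis=0) - raw.min(axis=0)
    norm = (raw - raw.min(axis=0)) / np.where(span == 0, 1, span)  # -> [0, 1]
    w = np.array([weights[n] for n in names], float)
    return int(np.argmax(norm @ w))  # index of the target image data

# e.g. a "person" shooting type might weight target-object count highest,
# while a "scene" type might weight sharpness or resolution highest.
```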
  • this embodiment can be applied to all photo-taking modes, or only to group-photo mode and/or panorama mode.
  • In this embodiment, when taking pictures, at least two of the cameras are controlled to collect image data; then, by comparing the feature information of the image data collected by the cameras, target image data whose feature information is better than the feature information of the image data collected by other cameras is determined, and the target image data is displayed on the preview interface of the mobile terminal. By selecting better image data from the image data collected by at least two cameras as the final displayed image, this embodiment, compared with using only one camera to collect images, avoids the case in which an unclear image collected by one camera leads to a poor shooting result, so the shooting effect of this embodiment is better.
  • This embodiment is based on the above-mentioned first embodiment; the step of comparing the feature information of the image data collected by all the cameras to determine the target image data includes:
  • Step S21, comparing the number of target objects in the image data of all the cameras, optionally, the feature information including the number of target objects;
  • Step S22, judging whether the number of target objects in all the image data is consistent;
  • if yes, step S23 is performed, and image data whose display features are better than the display features of the image data collected by other cameras is used as the target image data.
  • The feature information may also include display features; optionally, the display features include at least one of the display position of the target object, the display sharpness of the target object, the display brightness of the target object, and the image resolution.
  • Optionally, the feature information includes the number of target objects; by comparing the number of target objects in the image data of all the cameras, it is determined whether the numbers of target objects collected by the cameras are consistent. For example, in a group photo of 10 people, it is determined by identifying the number of target objects whether each camera has captured 10 people, or whether they have all captured the same number. If so, all the cameras have framed the people into the picture, or each camera has framed the same number of people; in this case, the target image data for final display cannot be determined from the number of target objects alone.
  • the characteristic information in this embodiment may further include display characteristics, such as at least one of the display position of the target object, the display resolution of the target object, the display brightness of the target object, and the image resolution.
  • In this case, the target image data is determined by the display features. For example, the image data whose target object is at a preset position of the preview interface is used as the target image data based on the display position of the target object; or, for example, the image data in which the target object is displayed most sharply is used as the target image data, or the image data in which the target object is displayed most brightly, or whose image resolution is highest, is used as the target image data.
  • the target image data is determined based on the number of target objects combined with display features, so that the display effect of the target objects is better while meeting the requirement for the number of target objects.
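The following sketch shows one way this count-first, display-feature-second logic could look, using OpenCV's bundled Haar face detector as a stand-in for "target object" recognition and sharpness as the fallback display feature; both choices are assumptions for illustration.

```python
# Count target objects first; fall back to a display feature on a tie.
import cv2

_face = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_targets(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(_face.detectMultiScale(gray, 1.1, 5))

def select_target(frames):
    counts = [count_targets(f) for f in frames]
    if len(set(counts)) > 1:                       # S22/S24: counts differ
        return frames[counts.index(max(counts))]   # most target objects wins
    # S23: counts consistent -> compare a display feature (sharpness here).
    def sharp(f):
        return cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY),
                             cv2.CV_64F).var()
    return max(frames, key=sharp)
```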
  • if not, step S24 is performed to obtain, as the target image data, the image data with the largest number of target objects;
  • Step S25, identifying the target objects of the target image data and of the other image data collected by the other cameras;
  • Step S26, if a target object in the other image data is inconsistent with the target objects in the target image data, updating the target image data by using image data synthesized from the other image data and the target image data.
  • The inconsistency of the target objects means that the target objects in the target image data do not correspond one-to-one to the target objects in the other image data; for example, the target image data has members A, B and C, while the other image data has members A and D. In this case, member D is a target object inconsistent with the target image data.
  • each of the target objects can be identified through face recognition technology.
  • In some cases, the image data collected by at least one camera includes all the people in the group photo, while the image data collected by at least one other camera includes only some of them (that is, some people are not framed into the picture). For example, if 10 people take a group photo, the image data of one camera includes 10 people while the image data of another camera includes 7 people.
  • the image data collected by all the cameras only includes part of the people in the group photo, and the number of people collected by each of the cameras is different, and there are some overlapping people in the group photo. For example, if 10 people take a group photo, one camera captures 8 people, while the other camera captures 7 people.
  • the image data collected by all the cameras only includes part of the people in the group photo, and the number of people collected by each of the cameras is different, and there are no overlapping people. For example, if 10 people take a group photo, one camera captures 5 people, while another camera captures the other 5 people.
  • the image data with the largest number of target objects is used as the target image data, so as to ensure that the captured image contains the most target objects.
  • In order to further improve the shooting effect so that more or even all target objects are captured in the image, if it is determined by comparing the image data collected by the cameras that the numbers of target objects are inconsistent, face recognition is started to identify whether the target objects of the other image data collected by the other cameras are consistent with the target objects in the target image data; for other image data containing target objects inconsistent with those in the target image data, the target image data is updated by using image data synthesized from that other image data and the target image data.
  • When the numbers of target objects are inconsistent, the image data with the largest number of target objects may, compared with the other image data, still miss some target objects that were not framed into the picture. After the other image data and the target image data are spliced and merged, more complete target image data is formed, so that a group photo with a more complete number of people can be displayed on the preview interface of the mobile terminal, improving the photographing effect.
  • the step of updating the target image data by using the image data synthesized by the other image data and the target image data includes:
  • Step S261, acquiring a non-overlapping area in the other image data relative to the target image data;
  • Step S262, splicing the image data of the non-overlapping area to the splicing position corresponding to the target image data;
  • Step S263, using the spliced image data as the target image data.
  • the non-overlapping area refers to an area in which data content is different in the image data collected by each of the cameras.
  • Optionally, by identifying the overlapping area and the non-overlapping area of the image data collected by each of the cameras, the boundary between the overlapping area and the non-overlapping area is taken as the splicing position; matting processing is then performed on the non-overlapping area in the other data, and the non-overlapping area is spliced to the splicing position corresponding to the target image data to complete the splicing of the image data, after which the spliced image data is displayed as the target image data on the preview interface of the mobile terminal.
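As a rough stand-in for the overlap detection, matting, and splicing described above, the sketch below uses OpenCV's high-level panorama stitcher; a terminal implementation would more likely exploit the known geometry of its fixed cameras, so this is illustrative only.

```python
# Splice other image data into the target image data via OpenCV stitching.
import cv2

def splice(target_frame, other_frame):
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch([target_frame, other_frame])
    if status == cv2.Stitcher_OK:
        return pano          # spliced result becomes the new target image data
    return target_frame      # on failure, keep the original target image data
```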
  • the step of stitching the image data of the non-overlapping area to the stitching position corresponding to the target image data includes:
  • Optionally, the image feature includes texture and/or color, and the similarity is determined based on the texture difference and/or the color difference; the smaller the texture difference and/or the color difference, the greater the similarity of the splicing positions.
  • Optionally, the similarity is determined based on the difference values of the color or texture corresponding to points on either side of the splicing position; if the difference values of the color or texture corresponding to a preset number of points at the splicing position are all smaller than a preset difference value, it is determined that the similarity of the splicing position is greater than or equal to the preset threshold.
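A sketch of that similarity test: pixels flanking the seam are sampled in pairs and their color differences compared against a preset difference value; the threshold values here are assumptions, not figures from the patent.

```python
# Seam similarity: the share of flanking pixel pairs whose colour difference
# stays under a preset value must reach a preset ratio.
import numpy as np

def seam_similarity_ok(left_cols, right_cols, max_diff=20.0, min_ratio=0.8):
    """left_cols/right_cols: H x 3 pixel arrays on either side of the seam."""
    diffs = np.linalg.norm(left_cols.astype(float) - right_cols.astype(float),
                           axis=1)
    return (diffs < max_diff).mean() >= min_ratio
```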
  • If the similarity is smaller than the preset threshold, an adjustment is made according to the color difference or texture difference: for example, using the color in the target image data as a reference, the color of the non-overlapping area in the other data is adjusted so that the color difference between the target image data and the non-overlapping area of the other data is minimized; or, using the texture in the target image data as a reference, the texture of the non-overlapping area in the other data is adjusted so that the texture difference between the target image data and the non-overlapping area of the other data is minimized, thereby achieving image splicing.
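One simple way to realize the color adjustment is a mean/standard-deviation transfer toward the reference (Reinhard-style); the patent does not prescribe a particular algorithm, so this is an illustrative choice.

```python
# Match the colour statistics of the non-overlapping region to the target
# image data used as reference.
import numpy as np

def match_color(region, reference):
    region = region.astype(float)
    ref = reference.astype(float)
    for c in range(3):  # per BGR channel
        r_mean, r_std = region[..., c].mean(), region[..., c].std() or 1.0
        t_mean, t_std = ref[..., c].mean(), ref[..., c].std()
        region[..., c] = (region[..., c] - r_mean) / r_std * t_std + t_mean
    return np.clip(region, 0, 255).astype(np.uint8)
```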
  • Optionally, the splicing position is fused; for example, a natural-transition fusion process is performed on the splicing position, so that the non-overlapping area is fused with the target image data and the display effect of the image is improved.
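A common realization of such a natural-transition fusion is linear feathering across a band around the splicing position; the band width here is an assumed parameter.

```python
# Feather two same-height images that abut at the splicing position.
import numpy as np

def feather_blend(left_img, right_img, band=32):
    alpha = np.linspace(1.0, 0.0, band)[None, :, None]  # 1 -> 0 across the band
    seam_l = left_img[:, -band:].astype(float)
    seam_r = right_img[:, :band].astype(float)
    blended = (alpha * seam_l + (1 - alpha) * seam_r).astype(np.uint8)
    return np.hstack([left_img[:, :-band], blended, right_img[:, band:]])
```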
  • This embodiment splices the non-overlapping areas of the image data collected by the other cameras into the target image data, so that the image data displayed on the preview interface covers a wider display angle, improving the photographing effect of a group photo.
  • This embodiment is based on all of the above embodiments; after the step of displaying the target image data on the preview interface of the mobile terminal, the method may further include:
  • Step S40, identifying a target object at a preset position on the preview interface;
  • Step S50, outputting adjustment prompt information when the target object at the preset position is incomplete.
  • the preset position may be an edge position, or other positions.
  • the adjustment prompt information includes prompt information for rotating the folding angle of the folding screen of the mobile terminal, and/or prompt information for adjusting the position of the mobile terminal.
  • In this embodiment, after the target image data is formed based on the above embodiments and displayed on the preview interface of the mobile terminal, the target object at the preset position of the preview interface is identified (generally, a target object at the edge of the preview interface may have had only part of its position captured); if it is identified that the target object at the preset position is incomplete, adjustment prompt information is output.
  • For example, face recognition is used to identify whether only part of a face has been captured, or human body recognition is used to identify whether only part of a body has been captured, so as to determine that the target object is incomplete.
  • When the target object is incomplete, the range captured by the cameras may be adjusted by prompting the user to rotate the folding angle of the folding screen, after which the above embodiments are re-executed to regenerate the target image data on the preview interface. By adjusting the folding angle of the folding screen, the ranges of the images collected by the cameras located on the two screens can be adjusted so that the overlapping area is reduced and the combined viewing angle of the two cameras is increased, allowing the target object at the preset position to be captured as a complete image.
  • If the cameras are front cameras, the user is prompted to increase the folding angle, that is, to unfold the folding screen; if the cameras are rear cameras, the user is prompted to reduce the folding angle, that is, to fold the folding screen.
  • In other embodiments, the user may be prompted to adjust the position of the mobile terminal so that the distance between the cameras and the target object is increased and the field of view of the cameras is enlarged, allowing the target object at the preset position to be captured as a complete image.
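A sketch of the incompleteness check: a detected face whose bounding box touches the frame border is treated as only partially captured, which would trigger the adjustment prompt; the Haar detector and the margin value are stand-in assumptions.

```python
# Flag a target object at the frame edge as incomplete.
import cv2

_face = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def needs_adjustment(frame, margin=2):
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in _face.detectMultiScale(gray, 1.1, 5):
        if (x <= margin or y <= margin
                or x + fw >= w - margin or y + fh >= h - margin):
            return True  # e.g. prompt: rotate folding angle / move the terminal
    return False
```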
  • the present application also provides a method for taking pictures, the method for taking pictures includes:
  • Step S210, controlling at least two of the cameras to collect image data;
  • Step S220, identifying the target objects of the image data collected by the cameras;
  • Step S230, when the target objects are inconsistent, generating the target image data according to the image data collected by all the cameras;
  • Step S240, displaying the target image data on a preview interface of the mobile terminal.
  • The inconsistency of the target objects means that the target objects in the image data collected by at least one camera do not correspond to the target objects in the image data collected by the other cameras; for example, the image data collected by the first camera has members A, B and C, while the image data collected by the second camera has members A and D.
  • If the target objects in the image data collected by each of the cameras correspond one-to-one, the target objects are consistent; for example, the image data collected by the first camera has members A, B and C, and the image data collected by the second camera also has members A, B and C.
  • Optionally, face recognition technology is used to identify whether the target objects in the image data collected by the cameras are consistent.
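A sketch of the consistency check over face embeddings; `embed_faces` is a hypothetical helper standing in for whatever face-recognition backend the terminal uses, since the patent only requires that face recognition be applied.

```python
# Target objects are consistent when the people in every frame correspond
# one-to-one (here: every embedding finds a close match in the base frame).
import numpy as np

def targets_consistent(frames, embed_faces, thresh=0.6):
    """embed_faces(frame) -> list of embedding vectors, one per face."""
    sets = [embed_faces(f) for f in frames]
    base = sets[0]
    for other in sets[1:]:
        if len(other) != len(base):
            return False  # different numbers of people -> inconsistent
        for emb in other:
            if not any(np.linalg.norm(emb - b) < thresh for b in base):
                return False  # someone not present in the base frame
    return True
```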
  • When the target objects are consistent, the target image data can be determined based on the display features of the image data, for example by using image data whose display features are better than the display features of the image data collected by the other cameras as the target image data. Optionally, the feature information may further include display features, and the display features include at least one of the display position of the target object, the display sharpness of the target object, the display brightness of the target object, and the image resolution.
  • The application scenario of this embodiment is the same as that of the above-mentioned first embodiment; the difference is that in this embodiment, after at least two of the cameras are controlled to collect image data, the target objects in the image data collected by each of the cameras are identified directly, and it is judged whether the target objects captured by the cameras are consistent.
  • If the target objects are inconsistent, the following situations exist (taking a group photo as an example):
  • The numbers of target objects are the same, but every camera has captured only some of the people, and each camera has captured people the others have not. For example, if 10 people take a group photo, the image data of one camera includes 5 people, while the image data of the other camera includes the other 5 people.
  • The numbers of target objects are different, and every camera has captured only some of the people. For example, if 10 people take a group photo, the image data of one camera includes 6 people while the image data of the other camera includes 8 people, and optionally only four people are duplicated.
  • In these cases, the target image data is generated from the image data collected by all the cameras, so that more of the target objects are presented when the target image data is displayed on the preview interface; since the target image is generated from the preview data on the preview interface, an image with a better photographing effect can be obtained. For example, when taking a group photo, a more complete shot of the people being photographed is easier to obtain, and the user does not need to move back and forth to adjust the position of the terminal.
  • the step of generating the target image data according to the image data collected by all the cameras includes:
  • Target image data is generated based on the adjusted image data.
  • When the target objects captured by the cameras are inconsistent, all the cameras have captured only part of the target objects; in this case, the images captured by the cameras are spliced to form more complete target image data.
  • Optionally, the image data are directly spliced to obtain the target image data.
  • When parts of the image data collected by the cameras are the same, each image data has an overlapping area and a non-overlapping area; the boundary position between the overlapping area and the non-overlapping area of the collected image data is the splicing position.
  • Optionally, the non-overlapping areas of the other cameras may be processed by matting.
  • Optionally, the image feature includes texture and/or color, and the similarity is determined based on the texture difference and/or the color difference; the smaller the texture difference and/or the color difference, the greater the similarity of the splicing positions. Optionally, the similarity is determined based on the difference values of the color or texture corresponding to points on either side of the splicing position; if the difference values of the color or texture corresponding to a preset number of points at the splicing position are all smaller than a preset difference value, it is determined that the similarity of the splicing position is greater than or equal to the preset threshold.
  • If the similarity is smaller than the preset threshold, an adjustment is made according to the color difference or texture difference: for example, using the color in the target image data as a reference, the color of the non-overlapping area in the other data is adjusted so that the color difference between the target image data and the non-overlapping area of the other data is minimized; or, using the texture in the target image data as a reference, the texture of the non-overlapping area in the other data is adjusted so that the texture difference between the target image data and the non-overlapping area of the other data is minimized, thereby achieving image splicing.
  • Optionally, the step of generating the target image data based on the adjusted image data includes: performing fusion processing on the splicing position based on the adjusted image data, and generating the target image data based on the processed image data. Optionally, the splicing position is fused; for example, a natural-transition fusion process is performed on the splicing position, so that the non-overlapping area is fused with the target image data and the display effect of the image is improved.
  • The target camera in this embodiment can be set by the user; if the user sets camera 1 as the main camera and the other cameras as auxiliary cameras, then after the image data are collected, the image data collected by the other cameras are spliced into the image data collected by the main camera.
  • In other embodiments, the target camera may be determined based on the resolution of the collected images; for example, the camera corresponding to the image data with the higher image resolution is the target camera, and the cameras corresponding to the image data with the lower image resolution are the other cameras.
  • The splicing position is processed so that the spliced image is displayed naturally, improving the display effect of the image.
  • the present application also provides a mobile terminal.
  • the mobile terminal includes a memory and a processor.
  • a photographing program is stored in the memory.
  • the photographing program is executed by the processor, the steps of the photographing method in any of the foregoing embodiments are implemented.
  • the present application also provides a readable storage medium, on which a photographing program is stored, and when the photographing program is executed by a processor, the steps of the photographing method in any of the foregoing embodiments are implemented.
  • An embodiment of the present application further provides a computer program product; the computer program product includes computer program code, and when the computer program code is run on a computer, the computer is caused to execute the methods in the above various possible implementation manners.
  • the embodiment of the present application also provides a chip, including a memory and a processor.
  • the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device equipped with the chip executes the methods in the above various possible implementations.
  • Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in other words the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in one of the above storage media (such as ROM/RAM, magnetic disk, optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) to execute the method of each embodiment of the present application.
  • all or part of the above may be implemented by software, hardware, firmware, or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from a website, computer, server, or data center by wired (such as coaxial cable, optical fiber, digital subscriber line) or wireless (such as infrared, radio, microwave) means to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • Usable media may be magnetic media (e.g., floppy disk, storage disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), among others.
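A minimal sketch of the point-wise seam test described in the bullets above, assuming the per-point comparison value C = |Ti1 - Ti2| + |Ci1 - Ci2| used later in the description; the texture descriptor, the preset difference value of 10.0, and the quorum of 8 points are illustrative assumptions, not values fixed by this application.

    import numpy as np

    def seam_similarity_ok(tex1, col1, tex2, col2,
                           preset_diff=10.0, preset_count=8):
        """Point-wise seam test: C_i = |Ti1 - Ti2| + |Ci1 - Ci2|.

        tex*/col*: 1-D arrays of texture and color values sampled at matched
        points along the splicing position, one pair of arrays per side.
        """
        c = (np.abs(np.asarray(tex1, float) - np.asarray(tex2, float))
             + np.abs(np.asarray(col1, float) - np.asarray(col2, float)))
        # The seam counts as similar when a preset number of points fall
        # below the preset difference value.
        return int((c < preset_diff).sum()) >= preset_count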


Abstract

The present application provides a photographing method, the method comprising: controlling at least two cameras to collect image data; comparing feature information of the image data collected by the cameras, and determining target image data whose feature information is better than the feature information of the image data collected by the other cameras; and displaying the target image data on a preview interface of the mobile terminal. The present application further includes a mobile terminal and a readable storage medium. Compared with collecting images with only one camera, the present application can avoid the situation where an unclear image collected by one camera leads to a poor shooting result, improving the photographing effect of the mobile terminal.

Description

Photographing Method, Mobile Terminal and Readable Storage Medium

Technical Field
The present application relates to the technical field of communications, and in particular to a photographing method, a mobile terminal, and a readable storage medium.
Background
With the development of technology, mobile terminals have more and more functions, and their photographing effects keep improving; for example, more and more cameras are provided on mobile terminals to improve the photographing effect. However, even when a mobile terminal is provided with multiple cameras, during photographing one camera is generally used to output images, fixed according to the selected photographing mode, and the photographing effect is improved by setting different photographing modes that use different cameras to output images. In the process of conceiving and implementing the present application, the inventors found at least the following problem: when one of the cameras is used to output images, influencing factors of that camera during image collection may cause the instantaneously collected image to be unclear, so that shooting often has to be repeated many times, which affects the photographing effect.
The foregoing description is intended to provide general background information and does not necessarily constitute prior art.
Summary
In view of the above technical problem, the present application provides a photographing method, a mobile terminal, and a readable storage medium, so that the photographing effect is improved.
To solve the above technical problem, the present application provides a photographing method applied to a mobile terminal, including:
controlling at least two cameras to collect image data;
comparing feature information of the image data collected by the cameras, and determining target image data whose feature information is better than the feature information of the image data collected by the other cameras;
displaying the target image data on a preview interface of the mobile terminal.
Optionally, the mobile terminal to which the method is applied includes a folding screen, the folding screen has at least two foldable screens, and at least two of the cameras are respectively arranged on the two screens.
Optionally, the feature information of the image data includes at least one of the number of target objects, the position of the target object, the definition of the target object, the brightness of the target object, and the image resolution.
Optionally, the manner of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras includes at least one of the following:
comparing the numbers of target objects in the image data of all the cameras, and determining the image data with the largest number of target objects as the target image data;
comparing the positions of the target objects in the image data of all the cameras, and determining the image data in which the target object is located at a preset position of the preview interface as the target image data;
comparing the definition of the target objects in the image data of all the cameras, and determining the image data with the clearest target object as the target image data;
comparing the brightness of the target objects in the image data of all the cameras, and determining the image data with the brightest target object as the target image data;
comparing the image resolutions of the image data of all the cameras, and determining the image data with the highest image resolution as the target image data.
Optionally, the step of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras includes:
comparing the numbers of target objects in the image data of all the cameras; optionally, the feature information includes the number of target objects;
when the numbers of target objects in all the image data are consistent, taking the image data whose display features are better than the display features of the image data collected by the other cameras as the target image data; optionally, the feature information may further include display features; optionally, the display features include at least one of the display position of the target object, the display definition of the target object, the display brightness of the target object, and the image resolution.
Optionally, the step of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras may further include:
when the numbers of target objects are inconsistent, acquiring the target image data with the largest number of target objects;
identifying the target objects of the target image data and of the other image data collected by the other cameras;
if a target object in the other image data is inconsistent with the target objects of the target image data, updating the target image data with image data synthesized from the other image data and the target image data.
Optionally, the step of updating the target image data with image data synthesized from the other image data and the target image data includes:
acquiring the non-overlapping area of the other image data relative to the target image data;
splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data;
taking the spliced image data as the target image data.
Optionally, the step of splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data includes:
aligning the image data of the non-overlapping area with the splicing position of the target image data;
adjusting the image features at the splicing position between the target image data and the image data of the non-overlapping area, so that the similarity of the splicing position is greater than or equal to a preset threshold.
Optionally, the image features include at least one of texture and/or color, and the similarity is determined based on at least one of a texture difference and/or a color difference.
Optionally, after the step of displaying the target image data on the preview interface of the mobile terminal, the method may further include:
identifying a target object at a preset position of the preview interface;
outputting adjustment prompt information when the target object at the preset position is incomplete.
Optionally, the adjustment prompt information includes prompt information for rotating the folding angle of the folding screen of the mobile terminal and/or prompt information for adjusting the position of the mobile terminal.
The present application further provides a photographing method, including:
controlling at least two cameras to collect image data;
identifying the target objects of the image data collected by the cameras;
when the target objects are inconsistent, generating the target image data according to the image data collected by all the cameras;
displaying the target image data on the preview interface of the mobile terminal.
Optionally, the step of generating the target image data according to the image data collected by all the cameras includes:
acquiring the non-overlapping area of the image data collected by the other cameras that does not overlap with the image data collected by the target camera;
aligning the non-overlapping area to the splicing position of the image data collected by the target camera;
adjusting the image features at the splicing position so that the similarity of the splicing position is greater than or equal to a preset threshold;
generating target image data based on the adjusted image data.
Optionally, the step of generating target image data based on the adjusted image data includes:
performing fusion processing on the splicing position based on the adjusted image data;
generating the target image data based on the processed image data.
The present application further provides a mobile terminal, including a memory and a processor, wherein a photographing program is stored in the memory, and when the photographing program is executed by the processor, the steps of any one of the above methods are implemented.
The present application further provides a computer storage medium storing a computer program, and when the computer program is executed by a processor, the steps of any one of the above methods are implemented.
As described above, the photographing method of the present application is applied to a mobile terminal; when taking a photo, at least two cameras are controlled to collect image data; then, based on comparing the feature information of the image data collected by the cameras, target image data whose feature information is better than the feature information of the image data collected by the other cameras is determined; and the target image data is displayed on the preview interface of the mobile terminal. By selecting the better image data from the image data collected by at least two cameras as the final displayed image, compared with collecting images with only one camera, the present application can avoid the situation where an unclear image collected by one camera leads to a poor shooting result, improving the photographing effect of the mobile terminal.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and form part of the specification, illustrate embodiments consistent with the present application, and together with the specification serve to explain the principles of the present application. To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present application;
FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of the photographing method according to the first embodiment;
FIG. 4 is a schematic structural diagram of one embodiment of the mobile terminal according to the first embodiment;
FIG. 5 is a schematic structural diagram of another embodiment of the mobile terminal according to the first embodiment;
FIG. 6 is a schematic structural diagram of yet another embodiment of the mobile terminal according to the first embodiment;
FIG. 7 is a detailed flowchart of an embodiment of step S20 of the photographing method according to the second embodiment;
FIG. 8 is a detailed flowchart of an embodiment of step S26 of the photographing method according to the third embodiment;
FIG. 9 is a schematic flowchart of the photographing method according to the fourth embodiment;
FIG. 10 is a schematic flowchart of the photographing method according to the fifth embodiment.
The realization of the objects, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings. The above drawings show specific embodiments of the present application, which are described in more detail below. These drawings and the text description are not intended to limit the scope of the concept of the present application in any way, but to explain the concept of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
It should be noted that, as used herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but may also include other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element. In addition, components, features, and elements with the same name in different embodiments of the present application may have the same meaning or different meanings, and their specific meaning needs to be determined by their interpretation in the specific embodiment or further in combination with the context of that specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various kinds of information, this information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "at the time of" or "when" or "in response to determining". Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, kinds, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, kinds, and/or groups. The terms "or", "and/or", "including at least one of the following", etc. used in the present application may be interpreted as inclusive, or mean any one or any combination. For example, "including at least one of the following: A, B, C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as another example, "A, B or C" or "A, B and/or C" means "any one of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that although the steps in the flowcharts in the embodiments of the present application are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; rather, they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Depending on the context, the words "if" and "in case" as used herein may be interpreted as "at the time of" or "when" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (the stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should be noted that step codes such as S10 and S20 are used herein for the purpose of expressing the corresponding content more clearly and concisely, and do not constitute a substantive restriction on the order. In specific implementation, a person skilled in the art may execute S20 first and then S10, etc., but all of these shall fall within the protection scope of the present application.
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only used to facilitate the description of the present application and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the mobile terminals described in the present application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
In the following description, a mobile terminal is taken as an example. Those skilled in the art will understand that, apart from elements specifically used for mobile purposes, the structure according to the embodiments of the present application can also be applied to fixed types of terminals.
Referring to FIG. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components. Those skilled in the art will understand that the mobile terminal structure shown in FIG. 1 does not constitute a limitation of the mobile terminal; the mobile terminal may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
The components of the mobile terminal are described in detail below with reference to FIG. 1:
The radio frequency unit 101 may be used for receiving and sending signals during the sending and receiving of information or during a call; optionally, after receiving downlink information from a base station, it passes the information to the processor 110 for processing, and it also sends uplink data to the base station. Generally, the radio frequency unit 101 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, etc. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help users send and receive e-mail, browse web pages, access streaming media, etc., providing users with wireless broadband Internet access. Although FIG. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and may be omitted as needed within the scope of not changing the essence of the invention.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into audio signals and output them as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. Moreover, the audio output unit 103 can also provide audio output related to specific functions performed by the mobile terminal 100 (e.g., call signal reception sound, message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames can be displayed on the display unit 106. The image frames processed by the graphics processor 1041 can be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode, and a voice recognition mode, and can process such sound into audio data. In the case of the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and sending audio signals.
The mobile terminal 100 may further include at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor; optionally, the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait screens, related games, magnetometer posture calibration), vibration-recognition-related functions (such as pedometer, tapping), etc. As for other sensors that can also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, they are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, can collect touch operations by the user on or near it (such as operations by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Optionally, the touch detection device detects the touch orientation of the user, detects the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Optionally, the other input devices 1072 may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, which are not specifically limited here.
Optionally, the touch panel 1071 can cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 1 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external devices may include wired or wireless headset ports, external power supply (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting devices with identification modules, audio input/output (I/O) ports, video I/O ports, headphone ports, and so on. The interface unit 108 can be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. Optionally, the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect the various parts of the entire mobile terminal, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor. Optionally, the application processor mainly handles the operating system, the user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (such as a battery) for supplying power to the various components. Optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
Although not shown in FIG. 1, the mobile terminal 100 may further include a Bluetooth module, etc., which will not be described in detail here.
To facilitate understanding of the embodiments of the present application, the communication network system on which the mobile terminal of the present application is based is described below.
Referring to FIG. 2, FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application. The communication network system is an LTE system of universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are connected in communication in sequence.
Optionally, the UE 201 may be the above terminal 100, which will not be described again here.
The E-UTRAN 202 includes eNodeB 2021 and other eNodeBs 2022, etc. Optionally, the eNodeB 2021 can be connected to other eNodeBs 2022 through a backhaul (e.g., an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide access from the UE 201 to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gate Way) 2034, a PGW (PDN Gate Way) 2035, a PCRF (Policy and Charging Rules Function) 2036, and so on. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as the home location register (not shown in the figure) and stores some user-specific information about service characteristics, data rates, etc. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation for the UE 201 as well as other functions; the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, and it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown in the figure).
The IP services 204 may include the Internet, intranets, IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should know that the present application is not only applicable to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, and in view of example techniques in which one of the cameras is generally used to output images: influencing factors of the camera (such as shake) during image collection may cause the instantaneously collected image to be unclear, often requiring repeated shooting; or, since the range one camera can collect is limited, the shooting position has to be adjusted continuously to cover the range the user wants, making it easy to miss fleeting scenes and affecting the photographing effect. The embodiments of the present application are proposed accordingly.
First embodiment:
Referring to FIG. 3, the first embodiment of the photographing method proposed by the present application includes the following steps:
Step S10: controlling at least two cameras to collect image data;
Step S20: comparing feature information of the image data collected by the cameras, and determining target image data whose feature information is better than the feature information of the image data collected by the other cameras;
Step S30: displaying the target image data on the preview interface of the mobile terminal.
This embodiment is applied to a mobile terminal provided with at least two cameras. Optionally, at least two of the cameras are located at different positions on the same screen of the mobile terminal; as shown in FIG. 4, the cameras 11 are distributed along the horizontal direction of the display screen 1, and the cameras 11 collect images from different positions, or with different viewing angles, or with different fields of view. Alternatively, as shown in FIG. 5, at least two of the cameras 11 are located on different screens of the mobile terminal; the display screen 1 of the mobile terminal includes a main screen, a secondary screen, and a side screen, the cameras 11 are arranged on the main screen and the side screen, and at least two of the cameras 11 can collect images from different angles. Or, as shown in FIG. 6, the mobile terminal includes a folding screen 1, the folding screen 1 has at least two foldable screens, and at least two of the cameras 11 are respectively arranged on the two screens. It can be understood that the photographing function of this embodiment can be realized with any of the above structures; the following description takes as an example a mobile terminal that includes a folding screen, with the cameras respectively arranged on different foldable screens of the folding screen.
When receiving a photographing instruction, the mobile terminal controls at least two cameras to collect image data. Because the two cameras differ in position, viewing angle, or field of view, the image data collected by the two cameras are different. Based on this, the feature information of the image data collected by the cameras is compared to determine target image data whose feature information is better than the feature information of the image data collected by the other cameras, and the target image data is then displayed on the preview interface of the mobile terminal. Having at least two cameras collect images (optionally at the same time or not at the same time, e.g., collected separately within a preset short period; optionally, if not collected at the same time, the instruction triggering the collection may be a single instruction or multiple instructions) increases the probability of collecting image data with a better image effect, thereby reducing the impact on the current photographing result when a certain camera produces a poor effect due to some influencing factors. In this way, the user can quickly obtain the target image data and thus quickly shoot the image; an instantaneous scene can be captured quickly, and the image is generated from the image data with the best effect obtained by comparing the data collected by at least two cameras. The user does not need to shoot repeatedly, the shooting effect is good, and the photographing experience is improved.
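As a rough sketch, the flow of steps S10 to S30 can be phrased as "capture, score, pick the best". The gradient-energy score below is an illustrative stand-in for the feature comparison of step S20 (grayscale frames are assumed), not the application's prescribed metric.

    import numpy as np

    def score_frame(frame):
        # Gradient energy as a simple proxy for the "definition" feature;
        # real feature information would also cover object count, position,
        # brightness, and resolution (see the comparison modes below).
        gy, gx = np.gradient(frame.astype(float))
        return float((gx ** 2 + gy ** 2).mean())

    def pick_preview_frame(frames):
        # frames: one grayscale image per camera, already collected (step S10).
        scores = [score_frame(f) for f in frames]   # step S20: compare features
        return frames[int(np.argmax(scores))]       # shown at step S30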
Optionally, the image data collected by at least two of the cameras are image data of different fields of view, or image data of different viewing angles; therefore, the image data collected by each camera is at least partly different. In some embodiments, the image data collected by the cameras can be spliced to form an image with a wider field of view; for example, when applied to a group photo, this saves adjusting the shooting position back and forth.
Optionally, this embodiment uses at least two cameras to collect image data and improves the display effect of the picture by selecting target image data with a better display effect as the original data of the picture. Optionally, the target image data with the better display effect is selected based on the feature information of the image data collected by each camera; optionally, the feature information of the image data includes at least one of the number of target objects, the position of the target object, the definition of the target object, the brightness of the target object, and the image resolution.
Optionally, there are multiple ways of selecting the target image data. For example, the manner of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras includes at least one of the following:
(1) Comparing the numbers of target objects in the image data of all the cameras, and determining the image data with the largest number of target objects as the target image data.
That is, the feature information of the image data is the number of target objects. For example, when photographing people, the target image data is determined according to the number of people. Especially in a large group photo, the cameras can capture different numbers of people depending on their positions; this embodiment takes the image data collected by the camera with the largest number of target objects as the target image data and then displays it on the preview interface. In this way, when taking a group photo, multiple cameras work together, increasing the probability of capturing all the target objects, so that the purpose of the group photo can be achieved without adjusting positions, improving the photographing effect.
(2) Comparing the positions of the target objects in the image data of all the cameras, and determining the image data in which the target object is located at a preset position of the preview interface as the target image data.
Optionally, the feature information of the image data is the position of the target object, such as the display position of the target object relative to the preview interface. For example, when the target object is a person, it is identified whether the position of the target object is a preset position of the preview interface (the middle position, or the best display position such as two grids below the middle), and the image data in which the target object is located at the preset position of the preview interface is taken as the target image data and then displayed on the preview interface, so that the picture formed based on the preview data of the preview interface gives a better display effect of the person, improving the photographing effect.
(3) Comparing the definition of the target objects in the image data of all the cameras, and determining the image data with the clearest target object as the target image data.
Optionally, the feature information of the image data is the definition of the target object. For example, when the target object is a person, by identifying the definition of the person in the image data collected by each camera, the clearest image data is taken as the target image data, so that when the target image data is displayed on the preview interface, a clear display effect is achieved, improving the photographing effect.
(4) Comparing the brightness of the target objects in the image data of all the cameras, and determining the image data with the brightest target object as the target image data.
Optionally, the feature information of the image data is the brightness of the target object. For example, when the target object is a person, by comparing the brightness of the person in the image data collected by each camera, the brightest image data is taken as the target image data; in this way, when the target image data is displayed on the preview interface, the target object is displayed brighter, improving the photographing effect.
(5) Comparing the image resolutions of the image data of all the cameras, and determining the image data with the highest image resolution as the target image data.
Optionally, the feature information of the image data is the image resolution of the image data. By comparing the resolutions of the image data collected by each camera, the image data with the highest resolution is taken as the target image data; when the target image data is displayed on the preview interface, the resolution of the displayed image data is high, improving the display effect of the image and thus achieving a better photographing effect.
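The five modes can be sketched as follows, under stated assumptions: frames are HxWx3 arrays, an external detector (not specified by this application) supplies per-frame object boxes, and the gradient and mean-intensity proxies for definition and brightness are illustrative choices.

    import numpy as np

    def pick_target(frames, object_boxes, mode="count"):
        """Return the frame whose feature information wins under `mode`.

        frames: list of HxWx3 uint8 images, one per camera;
        object_boxes: per-frame list of (x, y, w, h) detector boxes.
        """
        if mode == "count":          # (1) most target objects
            key = [len(b) for b in object_boxes]
        elif mode == "position":     # (2) object nearest the preset position
            def closeness(f, boxes):
                if not boxes:
                    return -np.inf
                h, w = f.shape[:2]
                x, y, bw, bh = boxes[0]
                # assumed preset position: the centre of the preview
                return -np.hypot((x + bw / 2) - w / 2, (y + bh / 2) - h / 2)
            key = [closeness(f, b) for f, b in zip(frames, object_boxes)]
        elif mode == "definition":   # (3) clearest image (gradient proxy)
            key = [float(np.mean(np.square(np.gradient(f.mean(axis=2))[1])))
                   for f in frames]
        elif mode == "brightness":   # (4) brightest image (mean intensity)
            key = [float(f.mean()) for f in frames]
        else:                        # (5) highest image resolution
            key = [f.shape[0] * f.shape[1] for f in frames]
        return frames[int(np.argmax(key))]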
Optionally, the feature information in the step of comparing the feature information of the image data collected by the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras may be set by the user, or may be determined based on the shooting type. For example, before step S20, the photographing type is acquired, the feature information is determined according to the photographing type, and the feature information of the image data collected by each camera is then compared to determine the target image data whose feature information is better than the feature information of the image data collected by the other cameras. That is, the feature information may vary with different user settings, or vary with different shooting types; thus, different feature information is used in different ways of acquiring the target image data.
For example, when the shooting type is people, a higher weight is given to the number of target objects or the position of the target objects, and priority is given to comparing the target objects or their positions in the image data of each camera, then determining the image data with the largest number of target objects as the target image data, and/or the image data in which the target object is located at the preset position of the preview interface. Since the purpose when photographing people is mainly to display the people, determining the target image data based on the number and position of target objects yields image data that better meets the user's needs, improving the photographing effect.
For example, when the shooting type is scenery, a higher weight is given to feature information such as the definition of the target object or the image resolution, and priority is given to comparing the definition of the target object or the image resolution in the image data of each camera, determining the image data with the clearest target object as the target image data and/or the image data with the highest image resolution as the target image data. Since the prominence of scenery differs from that of people, obtaining an image with higher definition or higher image resolution can improve the display effect of the scenery, thereby improving the photographing effect for scenery.
In this embodiment, different shooting types correspond to different feature information, or different user settings correspond to different feature information. In this way, during photographing, image data that better meets the need can be obtained as the target image data according to different needs, increasing the probability of obtaining an image with a better display effect and improving the photographing effect.
Optionally, the target image data may also be determined by combining at least two of the above: the number of target objects, the position of the target object, the definition of the target object, the brightness of the target object, and the image resolution. Optionally, weights are set for the number of target objects, the position of the target object, the definition of the target object, the brightness of the target object, and the image resolution, and the target image data is determined preferentially according to the weights. For example, when the number of target objects has the largest weight, the target image data is determined by the number of target objects; if multiple candidates are determined by the number of target objects, the target image data is then determined according to feature information with a weight smaller than that of the number of target objects; for example, if the weight of the position of the target object is smaller than that of the number of target objects, the target image data is determined by the position of the target object.
Optionally, the weights corresponding to the feature information may be set by the user, or determined or generated based on the user's usage habits or big data analysis, or vary with different shooting scenarios.
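A sketch of the weight-ordered selection just described: the highest-weight feature decides first, and lower-weight features only break ties among the remaining candidates. The feature-value and weight tables are assumed inputs; for instance, with weights {'object_count': 3, 'object_position': 2}, two frames tied on object count would be separated by object position, matching the fallback above.

    def pick_by_weights(frames, feature_values, weights):
        """feature_values: {name: [value per frame]}; weights: {name: weight}."""
        candidates = list(range(len(frames)))
        # Walk the features from the largest weight to the smallest.
        for name in sorted(weights, key=weights.get, reverse=True):
            vals = feature_values[name]
            best = max(vals[i] for i in candidates)
            candidates = [i for i in candidates if vals[i] == best]
            if len(candidates) == 1:
                break   # a unique winner; lower-weight features are ignored
        return frames[candidates[0]]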
Optionally, this embodiment can be applied to all photographing modes, or only to the group photo mode and/or the panorama mode.
In this embodiment, when taking a photo, at least two cameras are controlled to collect image data; then, based on comparing the feature information of the image data collected by the cameras, target image data whose feature information is better than the feature information of the image data collected by the other cameras is determined; and the target image data is displayed on the preview interface of the mobile terminal. By selecting the better image data from the image data collected by at least two cameras as the final displayed image, compared with collecting images with only one camera, this embodiment can avoid the situation where an unclear image collected by one camera leads to a poor shooting result, and the shooting effect of this embodiment is better.
Second embodiment:
Referring to FIG. 7, this embodiment is based on the above first embodiment. Optionally, the step of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras includes:
Step S21: comparing the numbers of target objects in the image data of all the cameras; optionally, the feature information includes the number of target objects;
Step S22: determining whether the numbers of target objects in all the image data are consistent;
If yes, that is, when the numbers of target objects in all the image data are consistent, executing step S23: taking the image data whose display features are better than the display features of the image data collected by the other cameras as the target image data.
Optionally, the feature information may further include display features; optionally, the display features include at least one of the display position of the target object, the display definition of the target object, the display brightness of the target object, and the image resolution.
In this embodiment, the feature information includes the number of target objects. By comparing the numbers of target objects in the image data of all the cameras, it is determined whether the numbers of target objects collected by the cameras are consistent. For example, in a group photo of 10 people, by identifying the number of target objects it is determined whether the number captured by each camera is 10, or whether they are all the same number. If so, it means that all the cameras have framed the people of the group photo in the picture, or that the cameras have framed the same number of people; in this case the target image data to be finally displayed cannot be determined by the number of target objects.
Based on this, the feature information in this embodiment may further include display features, such as at least one of the display position of the target object, the display definition of the target object, the display brightness of the target object, and the image resolution. When the target image data cannot be obtained through the number of target objects, it is determined through the display features: for example, the image data in which the display position of the target object is at the preset position of the preview interface is taken as the target image data; or the image data with the highest display definition of the target object is taken as the target image data; or the image data with the brightest display of the target object or the highest image resolution is taken as the target image data.
Therefore, this embodiment determines the target image data based on the number of target objects combined with the display features, so that while the requirement on the number of target objects is met, the display effect of the target objects is better.
Optionally, if not, that is, when the numbers of target objects are inconsistent, executing step S24: acquiring the target image data with the largest number of target objects;
Step S25: identifying the target objects of the target image data and of the other image data collected by the other cameras;
Step S26: if a target object in the other image data is inconsistent with the target objects of the target image data, updating the target image data with image data synthesized from the other image data and the target image data.
It should be noted that the target objects being inconsistent means that the target objects in the target image data do not correspond one-to-one with the target objects in the other image data. For example, the target image data contains members A, B, and C, while the other image data contains members A and D; optionally, member D is the target object inconsistent with the target image data. In this embodiment, each target object can be identified by face recognition technology.
In the process of taking a group photo, if the numbers of target objects are inconsistent, the following situations generally exist:
First, the image data collected by at least one camera includes all the people of the group photo, while the image data collected by at least one camera includes only some of them (i.e., some people are not framed in the picture). For example, in a group photo of 10 people, the image data of one camera includes 10 people, while the image data of another camera includes 7 people.
Second, the image data collected by all the cameras contain only some of the people of the group photo, the numbers of people captured by the cameras are different, and there are overlapping people. For example, in a group photo of 10 people, one camera captures 8 people while another captures 7 people.
Third, the image data collected by all the cameras contain only some of the people of the group photo, the numbers of people captured by the cameras are different, and there are no overlapping people. For example, in a group photo of 10 people, one camera captures 5 people while the other camera captures the other 5 people.
In some optional embodiments, the image data with the largest number of target objects is taken as the target image data, to ensure that the captured image contains the most target objects.
In some optional embodiments, to further improve the shooting effect so that more or even all of the target objects are captured in the image, if comparing the image data collected by the cameras determines that the numbers of target objects are inconsistent, face recognition is started to identify whether the target objects in the other image data collected by cameras other than that of the target image data are consistent with the target objects in the target image data, the other objects inconsistent with the target objects in the target image data are determined, and the target image data is updated with image data synthesized from the other image data and the target image data.
In this embodiment, if the numbers of target objects are inconsistent, comparing the image data with the largest number of target objects with the other image data may reveal that some target objects have not been framed in the picture. In this case, to ensure the group photo is complete, the other image data and the target image data are spliced and merged to form more complete target image data, so that a group photo with a more complete set of people is displayed on the preview interface of the mobile terminal, improving the photographing effect.
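A sketch of the consistency check that triggers the merge: identities are compared as sets, so another frame contributes new content exactly when it contains someone the target frame does not. `recognize_ids` is a hypothetical stand-in for the face recognition step this embodiment names.

    def needs_merge(target_frame, other_frame, recognize_ids):
        target_ids = set(recognize_ids(target_frame))
        other_ids = set(recognize_ids(other_frame))
        # e.g. the target frame shows {A, B, C} and the other frame {A, D}:
        # member D is missing from the target, so the frames are merged.
        return not other_ids.issubset(target_ids)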
Third embodiment:
Referring to FIG. 8, this embodiment is based on the above second embodiment. Optionally, the step of updating the target image data with image data synthesized from the other image data and the target image data includes:
Step S261: acquiring the non-overlapping area of the other image data relative to the target image data;
Step S262: splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data;
Step S263: taking the spliced image data as the target image data.
When it is determined that the other image data contains other objects different from the target objects in the target image data, it means that there is a non-overlapping area between the other image data and the target image data. That is, the non-overlapping area refers to the area of the image data collected by the cameras whose data content is not the same.
Each piece of image data is scanned to obtain the non-overlapping area and the overlapping area; the boundary between the overlapping area and the non-overlapping area is the splicing position. The non-overlapping area in the other data is then cut out and spliced to the corresponding splicing position of the target image data to complete the splicing of the image data; the spliced image data is then taken as the target image data and displayed on the preview interface of the mobile terminal.
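A sketch, assuming two same-sized, horizontally adjacent frames, of how the overlapping area and the splicing position (the boundary between the overlapping and non-overlapping areas) might be located; the brute-force column matching is an illustrative choice, since the application does not prescribe a matching method.

    import numpy as np

    def find_splice_column(target, other, min_overlap=16):
        """Return the column index in `target` where the overlap begins."""
        h, w = target.shape[:2]
        best_off, best_err = min_overlap, np.inf
        for off in range(min_overlap, w):
            # error between the right `off` columns of `target`
            # and the left `off` columns of `other`
            err = np.mean((target[:, w - off:].astype(float)
                           - other[:, :off].astype(float)) ** 2)
            if err < best_err:
                best_off, best_err = off, err
        return w - best_off

    def splice(target, other):
        overlap = target.shape[1] - find_splice_column(target, other)
        # Keep `target` whole and append only the non-overlapping area.
        return np.concatenate([target, other[:, overlap:]], axis=1)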
Optionally, the step of splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data includes:
aligning the image data of the non-overlapping area with the splicing position of the target image data;
adjusting the image features at the splicing position between the target image data and the image data of the non-overlapping area, so that the similarity of the splicing position is greater than or equal to a preset threshold.
To make the spliced image blend naturally, in the process of splicing images, after the image data of the non-overlapping area is aligned to the corresponding position of the target image data, the image texture and/or color at the splicing position is adjusted so that the two sides of the splicing position become close to similar.
Optionally, the image features include at least one of texture and/or color, and the similarity is determined based on at least one of a texture difference and/or a color difference. The smaller the texture difference and/or the color difference, the greater the similarity of the splicing position.
In an embodiment, the similarity is determined based on the color or texture difference values at one or more points sampled along the splicing position; if, within the splicing position, the color or texture difference values at a preset number of points are smaller than a preset difference value, it is determined that the similarity of the splicing position is greater than or equal to the preset threshold.
In some embodiments, the similarity is determined based on the sum of the color and/or texture difference values at one or more points sampled along the splicing position. For example, the difference between the color and/or texture of a point corresponding to one camera and that of the corresponding point of another camera is obtained, and the texture difference and/or color difference are added together as the comparison value; for instance, the smaller the absolute difference value C = |Ti1 - Ti2| + |Ci1 - Ci2| between the texture and/or image color of the i-th point of one camera and the i-th point of the other camera, the higher the similarity.
In the specific adjustment process, the adjustment is made according to the color difference or texture difference. For example, using the color in the target image data as the reference, the color of the non-overlapping area in the other data is adjusted so that the difference between the color of the target image data and the color of the non-overlapping area of the other data is minimized; or, using the texture in the target image data as the reference, the texture of the non-overlapping area in the other data is adjusted so that the difference between the texture of the target image data and the texture of the non-overlapping area of the other data is minimized, thereby achieving image splicing.
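A sketch of this color adjustment with the target image data as the reference: the non-overlapping region is shifted so that its mean color matches the target pixels at the splicing position. Matching channel means is one illustrative way of "minimizing the color difference", not the application's prescribed procedure.

    import numpy as np

    def match_colors(target_seam, patch):
        """target_seam: target pixels at the splicing position (Nx3);
        patch: the non-overlapping image data to adjust (HxWx3)."""
        shift = (target_seam.reshape(-1, 3).mean(axis=0)
                 - patch.reshape(-1, 3).mean(axis=0))
        # Shift every pixel of the non-overlapping area toward the reference.
        return np.clip(patch.astype(float) + shift, 0, 255).astype(np.uint8)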
Optionally, after adjusting the image features at the splicing position between the target image data and the image data of the non-overlapping area, the splicing position is fused. For example, a natural transition fusion process is performed on the splicing position so that the non-overlapping area is fused with the target image data, improving the display effect of the image.
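One possible "natural transition" fusion is a linear cross-fade over the band around the splicing position, so the non-overlapping area blends into the target image data; the linear ramp and band width are assumed choices, as the application only calls for a natural transition.

    import numpy as np

    def fuse_overlap(target_band, other_band):
        """Cross-fade two views of the same seam band (both H x W x 3)."""
        w = target_band.shape[1]
        alpha = np.linspace(0.0, 1.0, w)[None, :, None]  # 0 at target side
        fused = ((1 - alpha) * target_band.astype(float)
                 + alpha * other_band.astype(float))
        return fused.astype(np.uint8)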
In this embodiment, by splicing into the target image data the area of the image data collected by the other cameras that does not overlap with the target image data, the image data displayed on the preview interface has a wider viewing angle, improving the photographing effect of the group photo.
Fourth embodiment:
Referring to FIG. 9, this embodiment is based on all the above embodiments. After the step of displaying the target image data on the preview interface of the mobile terminal, the method may further include:
Step S40: identifying a target object at a preset position of the preview interface;
Step S50: outputting adjustment prompt information when the target object at the preset position is incomplete.
Optionally, the preset position may be an edge position or another position.
Optionally, the adjustment prompt information includes prompt information for rotating the folding angle of the folding screen of the mobile terminal and/or prompt information for adjusting the position of the mobile terminal.
The following description takes the target object being a person as an example:
To avoid part of a person's body not being framed in the preview interface, which would cause only part of the body of some people to be captured and affect the photographing effect, this embodiment forms the target image data based on the above embodiments and, after displaying the target image data on the preview interface of the mobile terminal, identifies the target object at the preset position of the preview interface (generally, a target object located at the edge of the preview interface may be only partially captured); if the target object at the preset position is identified as incomplete, adjustment prompt information is output.
Optionally, this embodiment uses face recognition to identify whether only part of a face has been captured, or uses human body recognition to identify whether only part of a body has been captured, thereby determining that the target object is incomplete.
Optionally, in the folding screen embodiment, when the target object at the preset position is incomplete, the user can be prompted to rotate the folding angle of the folding screen to adjust the range collected by the cameras, and the above embodiments are then re-executed to regenerate the target image data on the preview interface. It should be noted that after the folding angle of the folding screen is adjusted, the image ranges collected by the cameras located on the two screens can be adjusted so that the overlapping area is reduced and the collecting viewing angles of the two cameras are increased, allowing a complete image of the target object at the preset position to be collected. It can be understood that if the cameras are front cameras, the prompt is to increase the folding angle, that is, to unfold the folding screen; if the cameras are rear cameras, the prompt is to decrease the folding angle, that is, to fold the folding screen.
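The prompt logic of this paragraph and the next can be sketched as below: the direction of the folding-angle prompt depends on whether the cameras face front or rear, and non-folding devices are asked to adjust position instead. The prompt strings are illustrative, and the completeness flag is assumed to come from the face or body recognition above.

    from typing import Optional

    def adjustment_prompt(object_complete, has_folding_screen,
                          front_facing) -> Optional[str]:
        if object_complete:
            return None   # nothing to adjust
        if has_folding_screen:
            # Front cameras: unfold (increase the folding angle);
            # rear cameras: fold (decrease the folding angle).
            return ("increase the folding angle (unfold the screen)"
                    if front_facing
                    else "decrease the folding angle (fold the screen)")
        return "move the terminal farther from the subject"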
In the non-folding-screen embodiment, when the target object at the preset position is incomplete, the user can be prompted to adjust the position of the mobile terminal so that the distance between the cameras and the target object is greater, increasing the field of view of the cameras, so that a complete image of the target object at the preset position can be collected.
Fifth embodiment:
Referring to FIG. 10, the present application further provides a photographing method, including:
Step S210: controlling at least two cameras to collect image data;
Step S220: identifying the target objects of the image data collected by the cameras;
Step S230: when the target objects are inconsistent, generating the target image data according to the image data collected by all the cameras;
Step S240: displaying the target image data on the preview interface of the mobile terminal.
It should be noted that the target objects being inconsistent means that the target objects in the image data collected by at least one camera do not correspond one-to-one with the target objects in the image data collected by the other cameras. For example, the image data collected by a first camera contains members A, B, and C, while the image data collected by a second camera contains members A and D.
When the target objects in the image data collected by the cameras correspond one-to-one, the target objects are consistent. For example, the image data collected by the first camera contains members A, B, and C, and the image data collected by the second camera also contains members A, B, and C.
Optionally, this embodiment uses face recognition technology to identify whether the target objects in the image data collected by the cameras are consistent.
When the target objects are consistent, the target image data can be determined based on the display features of the image data, for example, taking as the target image data the image data whose display features are better than the display features of the image data collected by the other cameras. Optionally, the feature information may further include display features, and the display features include at least one of the display position of the target object, the display definition of the target object, the display brightness of the target object, and the image resolution.
The application scenario of this embodiment is the same as that of the above first embodiment; the difference is that, after controlling at least two cameras to collect image data, this embodiment directly identifies the target objects in the image data collected by each camera and determines whether the target objects collected by the cameras are consistent. When the target objects are inconsistent, the following situations exist (taking a group photo as an example):
1. The numbers of target objects are the same, but all the cameras capture only some of the people, and among the captured people there are people who do not repeat. For example, in a group photo of 10 people, the image data of one camera includes 5 people, and the image data of another camera includes 5 people.
2. The numbers of target objects are different, but all the cameras capture only some of the people. For example, in a group photo of 10 people, the image data of one camera includes 6 people, while the image data of another camera includes 8 people; optionally, only four people are repeated.
Based on this, when the target objects are inconsistent, the target image data is generated according to the image data collected by all the cameras, so that when the target image data is displayed on the preview interface, a picture with more target objects is presented. When the target image is generated based on the preview data of the preview interface, an image with a better photographing effect can be obtained; for example, when taking a group photo, it is easier to obtain a shooting position covering more of the group, without the user having to move back and forth to adjust the position of the terminal.
Sixth embodiment:
This embodiment is based on the above fifth embodiment. Optionally, the step of generating the target image data according to the image data collected by all the cameras includes:
acquiring the non-overlapping area of the image data collected by the other cameras that does not overlap with the image data collected by the target camera;
aligning the non-overlapping area to the splicing position of the image data collected by the target camera;
adjusting the image features at the splicing position so that the similarity of the splicing position is greater than or equal to a preset threshold;
generating target image data based on the adjusted image data.
When it is determined that the target objects collected by the cameras are inconsistent, it means that all the cameras have captured only some of the target objects. In this case, the images collected by the cameras are spliced to form more complete target image data.
Optionally, when the image data collected by the cameras have no overlapping area, the pieces of image data are directly spliced to obtain the target image data.
Optionally, if the image data collected by the cameras are partly the same, it means that the pieces of image data have an overlapping area and a non-overlapping area. The image data collected by the target camera is determined in advance, the image data collected by the other cameras is then scanned to obtain the non-overlapping area that does not overlap with the image data collected by the target camera, and the non-overlapping area is then aligned to the splicing position of the image data collected by the target camera. Optionally, the boundary between the non-overlapping area and the overlapping area of the image data is the splicing position.
Optionally, the non-overlapping area of the other cameras can be processed by cutting it out.
To make the spliced image blend naturally, in the process of splicing images, after the image data of the non-overlapping area is aligned to the corresponding position of the image data collected by the target camera, the image texture and/or color at the splicing position is adjusted so that the two sides of the splicing position become close to similar.
Optionally, the image features include at least one of texture and/or color, and the similarity is determined based on at least one of a texture difference and/or a color difference. The smaller the texture difference and/or the color difference, the greater the similarity of the splicing position.
In an embodiment, the similarity is determined based on the color or texture difference values at one or more points sampled along the splicing position; if, within the splicing position, the color or texture difference values at a preset number of points are smaller than a preset difference value, it is determined that the similarity of the splicing position is greater than or equal to the preset threshold.
In some embodiments, the similarity is determined based on the sum of the color and/or texture difference values at one or more points sampled along the splicing position. For example, the difference between the color and/or texture of a point corresponding to one camera and that of the corresponding point of another camera is obtained, and the texture difference and/or color difference are added together as the comparison value; for instance, the smaller the absolute difference value C = |Ti1 - Ti2| + |Ci1 - Ci2| between the texture and/or image color of the i-th point of one camera and the i-th point of the other camera, the higher the similarity.
In the specific adjustment process, the adjustment is made according to the color difference or texture difference. For example, using the color in the target image data as the reference, the color of the non-overlapping area in the other data is adjusted so that the difference between the color of the target image data and the color of the non-overlapping area of the other data is minimized; or, using the texture in the target image data as the reference, the texture of the non-overlapping area in the other data is adjusted so that the difference between the texture of the target image data and the texture of the non-overlapping area of the other data is minimized, thereby achieving image splicing.
Optionally, the step of generating target image data based on the adjusted image data includes: performing fusion processing on the splicing position based on the adjusted image data; and generating the target image data based on the processed image data. After adjusting the image features at the splicing position between the target image data and the image data of the non-overlapping area, the splicing position is fused; for example, a natural transition fusion process is performed on the splicing position so that the non-overlapping area is fused with the target image data, improving the display effect of the image.
Optionally, the target camera in this embodiment can be set by the user. For example, if the user presets camera 1 as the main camera and the other cameras as auxiliary cameras, then after the image data is collected, the image data collected by the other cameras is spliced into the image data collected by the main camera.
Alternatively, the target camera may be determined based on the resolution of the collected images; for example, the camera corresponding to the image data with the high image resolution is the target camera, and the cameras with the low image resolution are the other cameras.
In the image splicing process, this embodiment processes the splicing position so that the spliced image is displayed naturally, improving the display effect of the image.
The present application further provides a mobile terminal. The mobile terminal includes a memory and a processor; a photographing program is stored in the memory, and when the photographing program is executed by the processor, the steps of the photographing method in any of the above embodiments are implemented.
The present application further provides a readable storage medium on which a photographing program is stored; when the photographing program is executed by a processor, the steps of the photographing method in any of the above embodiments are implemented.
An embodiment of the present application further provides a computer program product; the computer program product includes computer program code, and when the computer program code is run on a computer, the computer is caused to execute the methods in the above various possible implementations.
An embodiment of the present application further provides a chip, including a memory and a processor; the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device equipped with the chip executes the methods in the above various possible implementations.
It can be understood that the above scenarios are only examples and do not constitute a limitation on the application scenarios of the technical solutions provided by the embodiments of the present application; the technical solutions of the present application can also be applied to other scenarios. For example, as a person of ordinary skill in the art knows, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
The above serial numbers of the embodiments of the present application are only for description and do not represent the superiority or inferiority of the embodiments.
The steps in the methods of the embodiments of the present application can be reordered, combined, and deleted according to actual needs.
The units in the devices of the embodiments of the present application can be combined, divided, and deleted according to actual needs.
In the present application, for descriptions of the same or similar term concepts, technical solutions, and/or application scenarios, a detailed description is generally given only at the first occurrence; when they appear again later, they are generally not described again for brevity. When understanding the technical solutions and other content of the present application, for the same or similar term concepts, technical solutions, and/or application scenario descriptions that are not described in detail later, reference can be made to the earlier relevant detailed descriptions.
In the present application, the description of each embodiment has its own emphasis; for parts that are not detailed or recorded in a certain embodiment, reference can be made to the relevant descriptions of other embodiments.
The technical features of the technical solutions of the present application can be combined arbitrarily. To make the description concise, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combinations of these technical features, they should all be considered within the scope recorded in this application.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as above (such as ROM/RAM, magnetic disk, optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, controlled terminal, network device, etc.) to execute the method of each embodiment of the present application.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, digital subscriber line) or wireless (such as infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, storage disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), etc.
The above are only preferred embodiments of the present application and do not therefore limit the patent scope of the present application. Any equivalent structural or equivalent process transformation made using the contents of the specification and drawings of the present application, or direct or indirect application in other related technical fields, shall likewise be included within the patent protection scope of the present application.

Claims (13)

  1. A photographing method applied to a mobile terminal, characterized in that the method comprises:
    controlling at least two cameras to collect image data;
    comparing feature information of the image data collected by the cameras, and determining target image data whose feature information is better than the feature information of the image data collected by the other cameras;
    displaying the target image data on a preview interface of the mobile terminal.
  2. The method according to claim 1, characterized in that the feature information of the image data comprises at least one of the number of target objects, the position of the target object, the definition of the target object, the brightness of the target object, and the image resolution.
  3. The method according to claim 2, characterized in that the manner of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras comprises at least one of the following:
    comparing the numbers of target objects in the image data of all the cameras, and determining the image data with the largest number of target objects as the target image data;
    comparing the positions of the target objects in the image data of all the cameras, and determining the image data in which the target object is located at a preset position of the preview interface as the target image data;
    comparing the definition of the target objects in the image data of all the cameras, and determining the image data with the clearest target object as the target image data;
    comparing the brightness of the target objects in the image data of all the cameras, and determining the image data with the brightest target object as the target image data;
    comparing the image resolutions of the image data of all the cameras, and determining the image data with the highest image resolution as the target image data.
  4. The method according to any one of claims 1 to 3, characterized in that the step of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras comprises:
    comparing the numbers of target objects in the image data of all the cameras;
    when the numbers of target objects in all the image data are consistent, taking the image data whose display features are better than the display features of the image data collected by the other cameras as the target image data.
  5. The method according to claim 4, characterized in that the step of comparing the feature information of the image data collected by all the cameras and determining the target image data whose feature information is better than the feature information of the image data collected by the other cameras further comprises:
    when the numbers of target objects are inconsistent, acquiring the target image data with the largest number of target objects;
    identifying the target objects of the target image data and of the other image data collected by the other cameras;
    if a target object in the other image data is inconsistent with the target objects of the target image data, updating the target image data with image data synthesized from the other image data and the target image data.
  6. The method according to claim 5, characterized in that the step of updating the target image data with image data synthesized from the other image data and the target image data comprises:
    acquiring the non-overlapping area of the other image data relative to the target image data;
    splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data;
    taking the spliced image data as the target image data.
  7. The method according to claim 6, characterized in that the step of splicing the image data of the non-overlapping area to the corresponding splicing position of the target image data comprises:
    aligning the image data of the non-overlapping area with the splicing position of the target image data;
    adjusting the image features at the splicing position between the target image data and the image data of the non-overlapping area, so that the similarity of the splicing position is greater than or equal to a preset threshold.
  8. The method according to any one of claims 1 to 3, characterized in that after the step of displaying the target image data on the preview interface of the mobile terminal, the method further comprises:
    identifying a target object at a preset position of the preview interface;
    outputting adjustment prompt information when the target object at the preset position is incomplete.
  9. A photographing method, characterized in that the photographing method comprises:
    controlling at least two cameras to collect image data;
    identifying the target objects of the image data collected by the cameras;
    when the target objects are inconsistent, generating the target image data according to the image data collected by all the cameras;
    displaying the target image data on the preview interface of the mobile terminal.
  10. The method according to claim 9, characterized in that the step of generating the target image data according to the image data collected by all the cameras comprises:
    acquiring the non-overlapping area of the image data collected by the other cameras that does not overlap with the image data collected by the target camera;
    aligning the non-overlapping area to the splicing position of the image data collected by the target camera;
    adjusting the image features at the splicing position so that the similarity of the splicing position is greater than or equal to a preset threshold;
    generating target image data based on the adjusted image data.
  11. The method according to claim 10, characterized in that the step of generating target image data based on the adjusted image data comprises:
    performing fusion processing on the splicing position based on the adjusted image data;
    generating the target image data based on the processed image data.
  12. A mobile terminal, characterized in that the mobile terminal comprises a memory and a processor; optionally, a photographing program is stored in the memory, and when the photographing program is executed by the processor, the steps of the photographing method according to any one of claims 1 to 11 are implemented.
  13. A readable storage medium, characterized in that a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the steps of the photographing method according to any one of claims 1 to 11 are implemented.
PCT/CN2021/097994 2021-06-02 2021-06-02 Photographing method, mobile terminal and readable storage medium WO2022252158A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/097994 WO2022252158A1 (zh) 2021-06-02 2021-06-02 Photographing method, mobile terminal and readable storage medium
CN202180096946.0A CN117157989A (zh) 2021-06-02 2021-06-02 Photographing method, mobile terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/097994 WO2022252158A1 (zh) 2021-06-02 2021-06-02 Photographing method, mobile terminal and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022252158A1 true WO2022252158A1 (zh) 2022-12-08

Family

ID=84322721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097994 WO2022252158A1 (zh) 2021-06-02 2021-06-02 Photographing method, mobile terminal and readable storage medium

Country Status (2)

Country Link
CN (1) CN117157989A (zh)
WO (1) WO2022252158A1 (zh)

Also Published As

Publication number Publication date
CN117157989A (zh) 2023-12-01

Legal Events

  • 121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21943525; Country of ref document: EP; Kind code of ref document: A1)
  • NENP: Non-entry into the national phase (Ref country code: DE)