CN114143456B - Photographing method and device - Google Patents

Photographing method and device

Info

Publication number
CN114143456B
CN114143456B (application CN202111417657.4A)
Authority
CN
China
Prior art keywords
shared data
data object
frame
photographing
mutual exclusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111417657.4A
Other languages
Chinese (zh)
Other versions
CN114143456A (en)
Inventor
Guo Jialiang
Nie Heping
Yu Tao
Yan Sen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN202111417657.4A
Publication of CN114143456A
Application granted
Publication of CN114143456B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Abstract

The application relates to the technical field of intelligent terminal equipment, and discloses a photographing method and device for solving the problem of low photographing efficiency caused by long waiting times when photographing with a multi-frame synthesis method in the related art. In the application, image acquisition is performed in response to a photographing instruction, and the user is prompted not to move the intelligent terminal; once the images of the specified frame number for the multi-frame synthesis method have been acquired, the user is prompted that photographing is completed; the images of the specified frame number are then synthesized by the multi-frame synthesis method to obtain and output one frame of image. The method can detect whether the frame receiving stage has finished and, when it has, feed this back to the camera application, so that the user knows photographing is complete and may move the device. The user's waiting time is thereby reduced, avoiding the inconvenient photographing operation and poor user experience caused by waiting too long.

Description

Photographing method and device
Technical Field
The application relates to the technical field of intelligent terminal equipment, in particular to a photographing method and device.
Background
In order to improve the photographing effect of the smart phone, the camera of the smart phone can collect multiple frames of images, and then the multiple frames of images are processed by adopting a multiple frame synthesis method to obtain an image with a better photographing effect for a user.
However, in the related art, the multi-frame synthesis method requires the user to hold a pose for a period of time while the images are obtained, so the photographing efficiency in the related art is low.
Disclosure of Invention
The application aims to provide a photographing method and a photographing device, which are used for solving the problem of low photographing efficiency when photographing by adopting a multi-frame synthesis method in the related art.
In a first aspect, the present application provides a photographing method, the method comprising:
in response to a photographing instruction, collecting images and prompting the user not to move the intelligent terminal;
acquiring images of a specified frame number for a multi-frame synthesis method, and prompting that photographing is completed;
and synthesizing the images of the specified frame number by adopting the multi-frame synthesis method to obtain and output one frame of image.
Optionally, determining that the images of the specified frame number have been acquired includes:
accessing a shared data object;
and if the shared data object indicates a first acquisition result, determining that the images of the specified frame number have been acquired.
Optionally, the method further comprises:
maintaining the shared data object based on:
counting the number of frames of the acquired image based on the photographing instruction;
if the count reaches the specified frame number, updating the shared data object to indicate the first acquisition result;
and if the count does not reach the specified frame number, updating the shared data object to indicate a second acquisition result.
Optionally, before accessing the shared data object, the method further includes:
locking a mutual exclusion lock;
after accessing the shared data object, the method further includes:
releasing the mutual exclusion lock.
Optionally, before updating the shared data object to indicate the first acquisition result, the method further includes:
performing a locking operation on the mutual exclusion lock of the shared data object;
if the mutual exclusion lock is locked successfully, executing the operation of updating the shared data object to indicate the first acquisition result;
if the mutual exclusion lock is not locked, returning to the step of performing the locking operation on the mutual exclusion lock of the shared data object;
after updating the shared data object to indicate the first acquisition result, the method further includes:
releasing the mutual exclusion lock.
Optionally, if the shared data object cannot be accessed (i.e., it has not yet been created), creating the shared data object and setting it to indicate a second acquisition result;
before updating the shared data object to indicate the first acquisition result, the method further includes:
and if the shared data object has not been created, creating the shared data object.
Optionally, if multiple types of multi-frame synthesis methods are supported, the different types of multi-frame synthesis methods have a one-to-one correspondence with shared data objects.
In a second aspect, the present application provides an intelligent terminal, including a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the photographing method of any of the first aspects.
In a third aspect, the present application also provides a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the photographing method of any of the first aspects.
In a fourth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the photographing method of any of the first aspects.
According to the embodiments of the application, whether the frame receiving stage has finished can be detected; when it has, the end of frame receiving is fed back to the camera application and the user is informed that photographing is complete, so the problems of inconvenient photographing operation and poor user experience caused by long waiting times can be avoided.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an intelligent terminal according to an embodiment of the present application;
fig. 2 schematically illustrates a software architecture of an intelligent terminal according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating a photographing method according to an embodiment of the present application;
fig. 4a is a schematic diagram illustrating a frame of a photographing method according to an embodiment of the present application;
fig. 4b is a schematic flow chart illustrating a photographing method according to an embodiment of the present application;
fig. 5 is another schematic flow chart of a photographing method according to an embodiment of the present application;
fig. 6 illustrates an interface switching schematic diagram of a photographing method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Wherein the described embodiments are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Also, in the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relation between associated objects and means that three relations may exist; for example, "A and/or B" may represent the three cases where A exists alone, A and B exist together, or B exists alone. Furthermore, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first" and "second" are used in the following description for descriptive purposes only and are not to be construed as indicating relative importance or implying the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the application, "a plurality" means two or more, unless otherwise indicated.
A multi-frame synthesis method, such as a super-resolution algorithm, a multi-frame noise reduction algorithm or a multi-frame night scene algorithm, needs to acquire images of the same shooting object multiple times to obtain multiple frames, and then synthesizes these frames. The processing flow of a multi-frame synthesis method is generally divided into three stages: frame receiving, synthesis, and result return. Multiple images are taken, and the differing and similar information in the consecutive frames is combined with prior knowledge to process the series of images into one high-quality image. The core idea of this approach is to trade time bandwidth for spatial resolution. To ensure the continuity of the frames input to the multi-frame algorithm, when it is determined that a multi-frame algorithm is used for photographing, the camera application prompts the user in a certain manner. When the multi-frame algorithm needs more input images or the images are larger, the user's waiting time grows longer, which negatively affects the user experience.
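The patent does not give code for any particular synthesis algorithm; purely as an illustration of how extra frames (time) are traded for image quality (space), a minimal multi-frame noise reduction can be sketched in C++ as a per-pixel average over aligned frames. All names below are illustrative, and a real pipeline would also align frames and reject ghosting:

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch: average N aligned grayscale frames per pixel.
// Assumes at least one frame and that all frames have the same size.
std::vector<uint8_t> AverageFrames(const std::vector<std::vector<uint8_t>>& frames) {
    std::vector<uint32_t> acc(frames.front().size(), 0);
    for (const auto& frame : frames)
        for (size_t i = 0; i < frame.size(); ++i)
            acc[i] += frame[i];                                  // accumulate per pixel
    std::vector<uint8_t> out(acc.size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = static_cast<uint8_t>(acc[i] / frames.size());  // average
    return out;
}
```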
In view of this, the present application provides a photographing method that is easy and convenient to operate and improves photographing efficiency and user experience. In the method, whether the frame receiving stage has finished can be detected; when it has, the end of frame receiving is fed back to the camera application so that the user may move the device. The problems of inconvenient photographing operation and poor user experience caused by long waiting times can thereby be avoided.
Fig. 1 shows a schematic structure of a terminal 100.
The embodiment will be specifically described below with reference to the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 1 is only one example, and that the terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A hardware configuration block diagram of the terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the terminal 100 includes: radio Frequency (RF) circuitry 110, memory 120, display unit 130, camera 140, sensor 150, audio circuitry 160, wireless fidelity (Wireless Fidelity, wi-Fi) module 170, processor 180, bluetooth module 181, and power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during the process of receiving and transmitting information or communication, and may receive downlink data of the base station and then transmit the downlink data to the processor 180 for processing; uplink data may be sent to the base station. Typically, RF circuitry includes, but is not limited to, antennas, at least one amplifier, transceivers, couplers, low noise amplifiers, diplexers, and the like.
Memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal 100 and data processing by running software programs or data stored in the memory 120. Memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate. The memory 120 of the present application may store an operating system and various application programs, and may also store program code for performing the methods of the embodiments of the present application.
The display unit 130 may be used to receive input digital or character information and generate signal inputs related to user settings and function control of the terminal 100. In particular, the display unit 130 may include a touch screen 131 provided at the front of the terminal 100, which may collect the user's touch operations on or near it, such as clicking buttons, dragging scroll boxes, and the like.
The display unit 130 may also be used to display information input by a user or information provided to the user and a graphical user interface (graphical user interface, GUI) of various menus of the terminal 100. In particular, the display unit 130 may include a display 132 disposed on the front of the terminal 100. The display 132 may be configured in the form of a liquid crystal display, light emitting diodes, or the like. The display unit 130 may be used to display the captured preview image and the image synthesized using the multi-frame synthesizing method described in the present application. The display unit 130 may also prompt the user that photographing is completed.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement input and output functions of the terminal 100, and after integration, the touch screen may be simply referred to as a touch display screen. The display unit 130 may display the application program and the corresponding operation steps in the present application.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the processor 180 for conversion into a digital image signal.
The terminal 100 may further include at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal 100 may also be configured with other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, light sensors, motion sensors, and the like.
Audio circuitry 160, speaker 161, microphone 162 can provide an audio interface between the user and terminal 100. The audio circuit 160 may transmit the received electrical signal converted from audio data to the speaker 161, and the speaker 161 converts the electrical signal into a sound signal and outputs the sound signal. The terminal 100 may also be configured with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data, which is output to the RF circuit 110 for transmission to, for example, another terminal, or to the memory 120 for further processing.
Wi-Fi belongs to a short-range wireless transmission technology, and the terminal 100 can help a user to send and receive e-mail, browse web pages, access streaming media and the like through the Wi-Fi module 170, so that wireless broadband internet access is provided for the user.
The processor 180 is a control center of the terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, the processor 180 may include one or more processing units; the processor 180 may also integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., and a baseband processor that primarily handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. The processor 180 of the present application may run an operating system, an application program, a user interface display and a touch response, and a processing method according to the embodiments of the present application. In addition, the processor 180 is coupled with the display unit 130.
The bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) also provided with a bluetooth module through the bluetooth module 181, thereby performing data interaction.
The terminal 100 also includes a power supply 190 (e.g., a battery) that provides power to the various components. The power supply may be logically connected to the processor 180 through a power management system, so that functions of managing charge, discharge, power consumption, etc. are implemented through the power management system. The terminal 100 may also be configured with power buttons for powering on and off the terminal, and for locking the screen, etc.
Fig. 2 is a software configuration block diagram of the terminal 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a distinct role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the Android system may be divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime (Android Runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, capture the screen, display information indicating that frame receiving has finished, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, short messages, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a short message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the terminal 100, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information (e.g., a message digest or message content of a short message) in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to announce that a download is complete or to give a message alert. The notification manager may also show notifications in the form of a chart or scrolling text in the system's top status bar, such as notifications of background-running applications, or notifications in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone sounds, the terminal vibrates, or an indicator light blinks.
The Android runtime includes a core library and virtual machines, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part comprises the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. It includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
In order to simplify the photographing operation flow and shorten the waiting time of photographing, the embodiment of the application provides a photographing method. As shown in fig. 3, the flow chart of the method comprises the following steps:
in step 301, image acquisition is performed in response to a photographing instruction.
In implementation, to improve image quality, the terminal can respond to the photographing instruction by prompting the user not to move the intelligent terminal. The user can thus keep a correct posture for photographing according to the prompt.
Because the multi-frame synthesis method needs multiple frames of images to improve the shooting quality, the user should keep the posture unchanged while shooting so that the shooting object can be captured multiple times. To improve the user experience and shorten the time the photographing posture must be held, in step 302, images of the specified frame number for the multi-frame synthesis method are acquired, and the user is prompted that photographing is completed.
After the prompt that photographing is completed, the user can move the terminal device. This shortens the user's waiting time and avoids the inconvenient operation caused by holding one posture for a long time.
In step 303, the multi-frame synthesis method is used to synthesize the images with the specified frame number, so as to obtain and output a frame of image.
In the embodiments of the application, different threads can be used for the different operations. For example, a multi-frame synthesis algorithm thread is responsible for collecting the multi-frame images and executing the multi-frame synthesis method, while a preview thread is responsible for querying the state of the frame receiving stage and notifying the camera APP once frame receiving has finished. The camera APP is responsible for interaction with the user and displays the final shooting result. The overall processing framework of this embodiment can be summarized as shown in fig. 4a: the multi-frame synthesis algorithm thread stores the acquired multi-frame image data in a buffer and at the same time determines whether the frame receiving stage has finished; if it has, the preview thread informs the camera APP, so that the camera APP can prompt the user that photographing is complete; the multi-frame synthesis algorithm thread then synthesizes the frames collected in the frame receiving stage and outputs the high-quality image to the camera APP for display.
To make it easy to inform the camera APP that frame receiving has ended, in the embodiments of the present application a shared data object is set to indicate whether the frame receiving stage is complete. The shared data object may take two values: a first acquisition result, expressing that frame receiving has ended, and a second acquisition result, indicating that the frame receiving stage has not yet ended.
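The patent describes the shared data object only in terms of its two values and its mutual exclusion lock. As a minimal C++ sketch under that assumption (the identifiers SharedDataObject and AcquisitionResult are illustrative, not from the patent):

```cpp
#include <mutex>

// Illustrative shape of the shared data object described above.
// kSecondResult = frame receiving still in progress (second acquisition result)
// kFirstResult  = frame receiving has ended (first acquisition result)
enum class AcquisitionResult { kSecondResult, kFirstResult };

struct SharedDataObject {
    std::mutex        mutexLock;                                  // the mutual exclusion lock
    AcquisitionResult result = AcquisitionResult::kSecondResult;  // default: not finished
};
```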
In implementation, the multi-frame synthesis algorithm thread can count the number of acquired image frames. If the count reaches the specified number of frames, the shared data object is updated to indicate the first acquisition result (i.e., frame receiving has ended); if not, the shared data object is updated to indicate the second acquisition result (i.e., frame receiving is still in progress). The multi-frame synthesis algorithm thread can thus conveniently maintain the state of the frame receiving stage through the shared data object, and the camera APP can conveniently learn whether the frame receiving stage has finished.
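Continuing that sketch, the counting flow in the multi-frame synthesis algorithm thread could look as follows; this is a hedged illustration (OnFrameCaptured is an assumed name), not the patented implementation:

```cpp
#include <mutex>

// Called by the synthesis thread after each captured frame: count the frame
// and update the shared data object under the mutual exclusion lock.
void OnFrameCaptured(SharedDataObject& shared, int& frameCount, int specifiedFrames) {
    ++frameCount;
    std::lock_guard<std::mutex> guard(shared.mutexLock);
    shared.result = (frameCount >= specifiedFrames)
                        ? AcquisitionResult::kFirstResult    // frame receiving ended
                        : AcquisitionResult::kSecondResult;  // still receiving frames
}
```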
For example, the access to the shared data object may be implemented in the preview thread; if the shared data object indicates the first acquisition result, it is determined that the images of the specified frame number have been acquired. The multi-frame synthesis algorithm thread and the preview thread may use the parameters of the same shared data object for their read and write operations, so that the same shared data object is provided to both threads.
For example, as shown in fig. 4b, the multi-frame algorithm thread acquires the shared data object when processing the first frame and sets its value to indicate that the frame receiving stage has not ended; after the frame receiving stage ends, it updates the shared data object, setting its value to indicate the end of frame receiving. Before frame receiving ends, if the preview thread accesses the shared data object, it can learn from this value that frame receiving is not yet finished.
The preview thread can poll the shared data object periodically. After frame receiving has ended, the preview thread learns this from the shared data object and can inform the camera APP of the end of frame receiving through the system state Metadata, so that the camera APP can prompt the user that photographing is complete. Afterwards, the multi-frame synthesis algorithm thread may reset the shared data object to a default value, which may be defined as "frame receiving not ended" or "not enabled". The shared data object may be released when the preview thread or the multi-frame algorithm thread exits, and is created again for the next photographing.
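A plausible shape for this polling loop, reusing the SharedDataObject sketch above; NotifyCameraApp() stands in for posting the end of frame receiving through the system state Metadata and is an assumption, not an API named by the patent:

```cpp
#include <chrono>
#include <mutex>
#include <thread>

void NotifyCameraApp();  // assumed hook: reports "frame receiving ended" via Metadata

// Preview thread: poll the shared data object until it indicates the
// first acquisition result, then notify the camera APP.
void PollFrameReceivingState(SharedDataObject& shared) {
    for (;;) {
        {
            std::lock_guard<std::mutex> guard(shared.mutexLock);
            if (shared.result == AcquisitionResult::kFirstResult) break;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(30));  // poll period
    }
    NotifyCameraApp();
}
```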
In implementation, the multi-frame synthesis algorithm thread and the preview thread use the same parameters for the shared data object, so that the two threads share the same object. An update flow for the shared data object is added to the multi-frame synthesis algorithm thread, for example counting the acquired frames, judging whether frame receiving has finished, and updating the shared data object based on the judgment result. In addition, an access-and-release flow for the shared data object is added to the multi-frame algorithm thread so that resources are released in time.
To allow the preview thread to obtain the status of the frame receiving stage conveniently, in the embodiments of the present application a corresponding mutual exclusion lock may be set for the shared data object. Whether the shared data object is being accessed or overwritten, the mutual exclusion lock must be locked before the access or update operation is performed.
For example, before the preview thread accesses the shared data object, it locks the mutual exclusion lock. If locking fails, the multi-frame synthesis algorithm thread is operating on the shared data object, and the preview thread can access it later to obtain the latest state of the frame receiving stage. If locking succeeds, the shared data object can be accessed to learn whether the frame receiving stage has finished. After the access, the preview thread releases the mutual exclusion lock so that the multi-frame synthesis algorithm thread can update the shared data object.
Similarly, when the multi-frame synthesis algorithm thread operates on the shared data object, it must first lock the mutual exclusion lock. If locking fails, the preview thread is reading the shared data object, and the update can be performed later, after the preview thread releases the lock. After the multi-frame synthesis algorithm thread updates the shared data object, it releases the mutual exclusion lock promptly so that the preview thread can access the object.
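This lock-and-retry behaviour maps naturally onto std::mutex::try_lock; the patent does not prescribe a particular locking API, so the following is only an illustrative sketch:

```cpp
#include <chrono>
#include <mutex>
#include <thread>

// Try to take the mutual exclusion lock; while the peer thread holds it,
// back off briefly and retry, then update and release promptly.
void UpdateWithRetry(SharedDataObject& shared, AcquisitionResult newResult) {
    while (!shared.mutexLock.try_lock()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));  // peer busy, retry
    }
    shared.result = newResult;  // protected write
    shared.mutexLock.unlock();  // release so the peer thread can proceed
}
```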
In implementation, the identifier of the shared data object used by the update operation in the multi-frame synthesis algorithm thread is the same as the identifier used by the access operation in the preview thread, so that both operate on the same shared data object.
In other embodiments, to avoid creating the shared data object repeatedly, the multi-frame synthesis algorithm thread and the preview thread may use predefined parameters for the shared data object, and whichever thread operates first creates it. For example, when the preview thread accesses the shared data object first and the object does not yet exist, the preview thread may create it based on the predefined parameters and then release the mutual exclusion lock; the multi-frame synthesis algorithm thread then updates the shared data object using the same predefined parameters.
Conversely, when the multi-frame synthesis algorithm thread updates the shared data object first, it creates the object based on the predefined parameters, performs the update, and then releases the mutual exclusion lock. The preview thread may then access the shared data object using the same predefined parameters.
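One way to realize create-on-first-use is a registry keyed by the predefined parameters. The sketch below assumes a string identifier suffices and reuses the SharedDataObject type from above:

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Create-on-first-use registry: whichever thread asks for a key first
// creates the shared data object; the other thread then reuses it.
SharedDataObject& GetOrCreateSharedDataObject(const std::string& key) {
    static std::mutex registryLock;
    static std::unordered_map<std::string, std::unique_ptr<SharedDataObject>> registry;
    std::lock_guard<std::mutex> guard(registryLock);
    auto& slot = registry[key];
    if (!slot) slot = std::make_unique<SharedDataObject>();  // first user creates it
    return *slot;
}
```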
In the embodiments of the application, when the intelligent terminal supports multiple types of multi-frame synthesis methods, each multi-frame synthesis method can correspond to its own shared data object; that is, the different types of multi-frame synthesis methods and the shared data objects have a one-to-one correspondence. This ensures that the thread of a given type of multi-frame synthesis method and the preview thread accurately access the corresponding shared data object.
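With such a registry, the one-to-one correspondence could be obtained simply by keying it with the synthesis method type; the keys below are illustrative, not taken from the patent:

```cpp
// One shared data object per multi-frame synthesis method type.
SharedDataObject& nightScene = GetOrCreateSharedDataObject("multi_frame_night_scene");
SharedDataObject& superRes   = GetOrCreateSharedDataObject("super_resolution");
```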
Fig. 5 is a schematic flow chart of a photographing method according to an embodiment of the present application, including the following steps:
in step 501, the camera APP presents a camera APP operation interface, such as that shown in fig. 6, including a photographing control.
In step 502, if the user clicks the photographing control to trigger photographing by the multi-frame synthesizing method, the multi-frame synthesizing algorithm thread responds to the photographing instruction to collect the image, and meanwhile, as shown in fig. 6, the camera APP prompts the user not to move the mobile phone.
In step 503, the multi-frame synthesis algorithm thread counts the acquired images and, when the count reaches the specified number of frames (i.e., the frame receiving stage is complete), locks the mutual exclusion lock of the shared data object.
In implementation, the multi-frame synthesis algorithm thread can update the shared data object periodically, or update it once after determining that frame receiving has finished.
In step 504, if the multi-frame synthesis algorithm thread locks the mutex successfully, the shared data object is updated to indicate the first acquisition result, and then the mutex is released.
In step 505, the preview thread locks the mutex lock and accesses the shared data object if the lock is successful.
In step 506, if the access to the shared data object indicates the first acquisition result, the preview thread notifies the camera APP to prompt the user that the photographing has ended. At this point the user may move the handset.
In step 507, the multi-frame synthesis algorithm thread synthesizes one frame of image using the multi-frame synthesis algorithm, and the synthesized image is then presented by the camera APP, as shown in fig. 6, which displays the synthesized high-quality image.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this should not be understood as requiring that the operations be performed in that particular order or that all of the illustrated operations be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
The above-provided detailed description is merely a few examples under the general inventive concept and does not limit the scope of the present application. Any other embodiments which are extended according to the solution of the application without inventive effort fall within the scope of protection of the application for a person skilled in the art.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. A photographing method, the method comprising:
responding to a photographing instruction, collecting images and prompting the user not to move the intelligent terminal;
accessing a shared data object, if the shared data object indicates a first acquisition result, determining that images of a specified frame number have been acquired, and prompting that photographing is completed;
synthesizing the images of the specified frame number by adopting a multi-frame synthesis method to obtain and output one frame of image;
wherein the shared data object is maintained based on the following method:
counting the number of frames of the acquired image based on the photographing instruction;
performing locking operation on the mutual exclusion lock of the shared data object;
if the mutual exclusion lock is locked and the count reaches the specified frame number, the operation of updating the shared data object to indicate the first acquisition result is executed, and if the mutual exclusion lock is locked and the count does not reach the specified frame number, the operation of updating the shared data object to indicate the second acquisition result is executed; releasing the mutual exclusion lock after the operation of updating the shared data object to indicate the first/second acquisition result is executed;
and if the mutual exclusion lock is not locked, returning to the step of executing the locking operation of the mutual exclusion lock of the shared data object.
2. The method of claim 1, wherein prior to said accessing the shared data object, the method further comprises:
locking the mutual exclusion lock;
after the accessing the shared data object, the method further comprises:
releasing the mutual exclusion lock.
3. The method of claim 1, wherein,
if the shared data object cannot be accessed, creating the shared data object, and setting the shared data object to indicate a second acquisition result;
before updating the shared data object to indicate the first acquisition result, the method further comprises:
and if the shared data object has not been created, creating the shared data object.
4. The method of claim 1, wherein if multiple types of multi-frame synthesis methods are supported, the different types of multi-frame synthesis methods and shared data objects have a one-to-one correspondence.
5. An intelligent terminal, characterized by comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the photographing method of any of claims 1-4.
6. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the photographing method of any of claims 1-4.
CN202111417657.4A 2021-11-26 2021-11-26 Photographing method and device Active CN114143456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111417657.4A CN114143456B (en) 2021-11-26 2021-11-26 Photographing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111417657.4A CN114143456B (en) 2021-11-26 2021-11-26 Photographing method and device

Publications (2)

Publication Number Publication Date
CN114143456A CN114143456A (en) 2022-03-04
CN114143456B (en) 2023-10-20

Family

ID=80387791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111417657.4A Active CN114143456B (en) 2021-11-26 2021-11-26 Photographing method and device

Country Status (1)

Country Link
CN (1) CN114143456B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100122253A1 (en) * 2008-11-09 2010-05-13 Mccart Perry Benjamin System, method and computer program product for programming a concurrent software application
US9152474B2 (en) * 2014-01-20 2015-10-06 Netapp, Inc. Context aware synchronization using context and input parameter objects associated with a mutual exclusion lock
KR20160020860A (en) * 2014-08-14 2016-02-24 LG Electronics Inc. Mobile terminal and method for controlling the same

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007006272A (en) * 2005-06-24 2007-01-11 Nikon Corp Imaging apparatus
CN108777767A * 2018-08-22 2018-11-09 OPPO Guangdong Mobile Communication Co., Ltd. Photographic method, device, terminal and computer readable storage medium
CN109951633A * 2019-02-18 2019-06-28 Huawei Technologies Co., Ltd. A method for photographing the moon and an electronic device
CN110049244A * 2019-04-22 2019-07-23 Huizhou TCL Mobile Communication Co., Ltd. Image pickup method, device, storage medium and electronic equipment
CN111510626A * 2020-04-21 2020-08-07 OPPO Guangdong Mobile Communication Co., Ltd. Image synthesis method and related device
WO2021223500A1 * 2020-05-07 2021-11-11 Huawei Technologies Co., Ltd. Photographing method and device
CN113329176A * 2021-05-25 2021-08-31 Hisense Electronic Technology (Shenzhen) Co., Ltd. Image processing method and related device applied to camera of intelligent terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image quality improvement technology for visible-light night scene images; Yang Chenwei; Excellent Doctoral Dissertations; full text *

Also Published As

Publication number Publication date
CN114143456A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
WO2020224485A1 (en) Screen capture method and electronic device
CN115473957B (en) Image processing method and electronic equipment
CN111597000B (en) Small window management method and terminal
CN113329176A (en) Image processing method and related device applied to camera of intelligent terminal
CN113709026B (en) Method, device, storage medium and program product for processing instant communication message
CN113038141B (en) Video frame processing method and electronic equipment
CN113055585B (en) Thumbnail display method of shooting interface and mobile terminal
CN114449171B (en) Method for controlling camera, terminal device, storage medium and program product
CN115460355B (en) Image acquisition method and device
CN114143456B (en) Photographing method and device
CN113254132B (en) Application display method and related device
WO2021204103A1 (en) Picture preview method, electronic device, and storage medium
CN113542711A (en) Image display method and terminal
CN113507614A (en) Video playing progress adjusting method and display equipment
CN113179362B (en) Electronic device and image display method thereof
CN115334239B (en) Front camera and rear camera photographing fusion method, terminal equipment and storage medium
CN116055857B (en) Photographing method and electronic equipment
CN113129238B (en) Photographing terminal and image correction method
CN113641533B (en) Terminal and short message processing method
CN111479075B (en) Photographing terminal and image processing method thereof
CN111988530B (en) Mobile terminal and photographing method thereof
CN113642010B (en) Method for acquiring data of extended storage device and mobile terminal
CN111142648B (en) Data processing method and intelligent terminal
WO2022206600A1 (en) Screen projection method and system, and related apparatus
CN112000411B (en) Mobile terminal and display method of recording channel occupation information thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230828

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Applicant after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 9 / F, Hisense south building, 1777 Chuangye Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518054

Applicant before: HISENSE ELECTRONIC TECHNOLOGY (SHENZHEN) Co.,Ltd.

GR01 Patent grant