WO2023225949A1 - Shooting preview method and apparatus, terminal device and storage medium - Google Patents

Shooting preview method and apparatus, terminal device and storage medium

Info

Publication number
WO2023225949A1
WO2023225949A1 (PCT application PCT/CN2022/095274)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
canvas
target image
cache space
target
Prior art date
Application number
PCT/CN2022/095274
Other languages
English (en)
French (fr)
Inventor
王国森 (Wang Guosen)
Original Assignee
北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 (Beijing Xiaomi Mobile Software Co., Ltd.)
Priority to PCT/CN2022/095274 priority Critical patent/WO2023225949A1/zh
Priority to CN202280004626.2A priority patent/CN117616747A/zh
Publication of WO2023225949A1 publication Critical patent/WO2023225949A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment

Definitions

  • The present disclosure relates to the field of terminals, and in particular to a shooting preview method and apparatus, a terminal device, and a storage medium.
  • Multi-screen terminal devices can provide users with more diverse display content. For example, in a shooting scene, each display screen of a multi-screen terminal device can display the shooting preview picture simultaneously to improve the user experience.
  • In the related art, however, when multiple screens display the shooting preview, the power consumption of the terminal device is high and heating is severe.
  • To overcome these problems, the present disclosure provides a shooting preview method and apparatus, a terminal device, and a storage medium.
  • a shooting preview method is proposed, which is applied to a terminal device.
  • the method includes:
  • the target image data is drawn on multiple canvas objects respectively, so that multiple display screens respectively display preview pictures represented by the target image data; wherein there is a one-to-one correspondence between the multiple canvas objects and the multiple display screens.
  • obtaining raw image data collected by a camera includes:
  • the original image data is obtained through the data receiving object.
  • the calling image processing thread performs effect processing on the original image data to obtain target image data, including:
  • the cache space of the image processing thread includes a first cache space and a second cache space
  • the original image data is sequentially subjected to image processing with multiple effects according to a preset processing sequence, including:
  • the target cache space is the first cache space or the second cache space.
  • drawing the target image data on multiple canvas objects respectively includes:
  • the target cache space is used to cache the target image data
  • the target image data is drawn on each canvas object respectively.
  • the method further includes:
  • the image processing thread is created, and/or a data receiving object is created.
  • when the shooting application is started, the method further includes:
  • a shooting preview device which is applied to terminal equipment.
  • the device includes:
  • the acquisition module is used to obtain the original image data collected by the camera
  • the calling module is used to call the image processing thread to perform effect processing on the original image data to obtain the target image data;
  • a drawing module for drawing the target image data on multiple canvas objects respectively, so that multiple display screens respectively display preview pictures corresponding to the target image data; wherein there is a one-to-one correspondence between the multiple canvas objects and the multiple display screens.
  • a terminal device including:
  • the processor is configured to execute the shooting preview method as described in any one of the above.
  • a non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of a terminal device, the terminal device is enabled to perform the shooting preview method described in any one of the above.
  • In the method, the image processing thread performs unified effect processing on the original image data, and the target image data obtained by the effect processing is then displayed on multiple display screens. This reduces the power consumption of the terminal device, alleviates the problem of high power consumption caused by multi-threaded synchronous operations, and thereby reduces heating and lag of the terminal device.
  • Figure 1 is a flowchart of a method according to an exemplary embodiment.
  • Figure 2 is a flowchart of a method according to an exemplary embodiment.
  • Figure 3 is a flowchart of a method according to an exemplary embodiment.
  • Figure 4 is a schematic diagram of processing nodes of a method according to an exemplary embodiment.
  • Figure 5 is a block diagram of a device according to an exemplary embodiment.
  • FIG. 6 is a block diagram of a terminal device according to an exemplary embodiment.
  • Multi-screen terminal devices in the related art generally use GLSurfaceView (a UI control that Android provides to developers) to implement preview display on multiple display screens.
  • In the related art, GLSurfaceView is used as the screen canvas corresponding to a display screen, so N GLSurfaceViews need to be created for N display screens.
  • Each GLSurfaceView control internally includes an OpenGL ES (OpenGL for Embedded Systems, an open graphics library for embedded systems) thread.
  • Each GLSurfaceView control needs to independently perform effect processing (such as beautification) on the original image data; that is, N display screens need to perform effect processing N times respectively. Only after all GLSurfaceView controls have completed effect processing can the multiple screens display preview pictures simultaneously.
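The contrast between the two approaches can be sketched as a simple cost model. This is an illustrative simulation only (the counts and function names are assumptions, not measurements from the patent): the related-art design repeats every effect once per screen, while a single processing thread runs each effect exactly once regardless of screen count.

```python
# Hypothetical cost model comparing the two architectures. Counts are
# illustrative; real cost depends on the GPU workload of each effect.

def related_art_effect_ops(num_screens: int, num_effects: int) -> int:
    # One GL thread per GLSurfaceView: every screen repeats all effects.
    return num_screens * num_effects

def unified_effect_ops(num_screens: int, num_effects: int) -> int:
    # A single image processing thread runs each effect once; the result
    # is then only drawn (cheaply) on each screen's canvas object.
    return num_effects

# e.g. 3 displays, 4 effects (beautification, filter, blur, watermark)
related = related_art_effect_ops(3, 4)  # 12 effect passes
unified = unified_effect_ops(3, 4)      # 4 effect passes
```

Under this model the related-art cost grows linearly with the number of screens, while the unified cost stays constant, which is the power saving the disclosure claims.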
  • embodiments of the present disclosure propose a shooting preview method, which is applied to a terminal device.
  • the method includes: obtaining original image data collected by a camera. Call the image processing thread to perform effect processing on the original image data to obtain the target image data.
  • the target image data is drawn on multiple canvas objects respectively, so that multiple display screens respectively display preview images represented by the target image data; wherein, there is a one-to-one correspondence between the multiple canvas objects and the multiple display screens.
  • In the method of the present disclosure, the image processing thread performs unified effect processing on the original image data, and the target image data obtained by the effect processing is then displayed on multiple display screens. This reduces the power consumption of the terminal device, alleviates the problem of high power consumption caused by multi-threaded synchronous operations, and thereby reduces heating and lag of the terminal device.
  • the shooting preview method in this embodiment is applied to a terminal device.
  • the terminal device may be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a smart screen or a smart wearable device, and the electronic device has at least two display screens.
  • the method in this embodiment may include the following steps:
  • In step S120, the image processing thread is called to perform effect processing on the original image data to obtain the target image data.
  • the terminal device in this embodiment takes a terminal device with an Android operating system as an example.
  • the method of this embodiment is applicable to shooting scenarios, and the method can be executed by the application layer in the system framework of the terminal device or by the processor of the terminal device.
  • In step S110, in a shooting scene, the shooting application of the terminal device is opened, and the camera hardware of the terminal device can collect image data within the viewing range in real time.
  • the application layer can obtain the original image data collected by the camera.
  • this step may include the following steps:
  • the data receiving object such as SurfaceTexture can be created when starting the shooting application, that is, as one of the startup parameters of the shooting application.
  • The preset interface can be the data calling interface of SurfaceTexture, such as a callback function that is set in advance. SurfaceTexture can be used to monitor whether there is original image data, that is, to monitor whether a data callback event occurs.
  • the application layer can obtain the original image data collected by the camera through SurfaceTexture.
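The receive path described above (register a callback, get notified of a data callback event, then pull the raw frame) can be sketched in a minimal, language-agnostic way. This is a hypothetical Python analogue of the SurfaceTexture pattern, not the Android API; all class and method names here are illustrative.

```python
# Minimal analogue of a data receiving object: it exposes a preset callback
# interface, signals when a new frame arrives, and lets the application
# layer pull the raw image data. (Illustrative names, not Android's API.)

class DataReceiver:
    def __init__(self):
        self._listener = None
        self._latest_frame = None

    def set_on_frame_available(self, listener):
        # Registering the preset interface (the callback).
        self._listener = listener

    def on_new_frame(self, frame):
        # Called when the camera produces raw image data; this models
        # the data callback event the application layer monitors.
        self._latest_frame = frame
        if self._listener is not None:
            self._listener(self)

    def acquire_frame(self):
        # The application layer obtains the raw image data here.
        return self._latest_frame

received = []
receiver = DataReceiver()
receiver.set_on_frame_available(lambda r: received.append(r.acquire_frame()))
receiver.on_new_frame("raw-frame-0")
```

On Android the analogous pieces would be `SurfaceTexture` with an `OnFrameAvailableListener`, but the control flow is the same: the callback fires, then the frame is acquired.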
  • The image processing thread may be a thread that initializes an image processing application programming interface (API) set, such as OpenGL ES (OpenGL for Embedded Systems, an open graphics library for embedded systems) or Vulkan (a cross-platform 2D and 3D drawing API).
  • the image processing thread is an OpenGL ES thread as an example.
  • the thread is created when the shooting application is started, and the OpenGL ES API is initialized in the thread.
  • Through the OpenGL ES thread, the graphics processor (GPU) can be used to perform effect processing on the original image data, such as performing various image algorithm operations to achieve image effects such as beautification and filters.
  • the target image data can be obtained.
  • the target image data can represent the preview screen to be displayed.
  • each canvas object can correspond to one display screen.
  • the canvas object refers to the interface layer between the image processing thread and the local window system. It can be used as a bridge between the image processing thread and the display screen. It is used to provide a drawing surface for the image processing thread so that the content drawn by the image processing thread can be displayed on the corresponding display.
  • the canvas object can be EGLSurface (Embedded Graphics Library Surface, embedded graphics frame window), or VkSurfaceKHR, an abstract type of surface that belongs to the Vulkan extension.
  • the canvas object is EGLSurface as an example.
  • EGLSurface is an intermediate interface layer between the OpenGL ES API and the local window system. It can be used as a bridge between OpenGL ES and the display screen, providing a drawing surface for OpenGL ES so that the content drawn by OpenGL ES can be displayed on the corresponding display screen.
  • The target image data obtained after unified effect processing by the OpenGL ES thread is drawn on each canvas object respectively, so that multiple displays can display preview pictures at the same time. This not only improves the user experience but also reduces the power consumption of the terminal device during multi-screen display.
  • the method of this embodiment may include steps S110 to S130 as shown in Figure 1 .
  • step S120 in this embodiment may include the following steps:
  • various effects include, for example, various types of image effects such as beautification, filters, blurring, and watermarks.
  • The preset processing sequence may be a processing sequence preconfigured by the terminal device. Multiple image processing algorithms (such as beautification, filters, blur, and watermark) are linked in sequence according to the preset processing order, and each image processing algorithm can be regarded as a node. One image processing algorithm (such as beautification) is executed at each node to obtain the image effect that that node can achieve. After all the image effects selected by the user have been processed, the target image data is obtained. It is understandable that, based on the user's choice, only some types of image effects may be used during a shooting process, but the data is still transferred sequentially through the nodes in the preset processing order.
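The node chain above can be sketched as follows. This is an illustrative simulation (a string stands in for the image data, and the effect names and function are assumptions): frames always traverse the nodes in the preset order, but only user-selected effects transform the data.

```python
# Sketch of the preset effect chain: each node wraps one image-processing
# algorithm; unselected effects act as pass-through nodes, matching the
# text's note that data still traverses every node in order.

EFFECT_ORDER = ["beautify", "filter", "blur", "watermark"]  # preset order

def run_effect_chain(raw: str, selected: set) -> str:
    data = raw
    for effect in EFFECT_ORDER:        # always traverse in the preset order
        if effect in selected:         # apply only user-selected effects
            data = f"{effect}({data})"
    return data

# User picked only beautification and watermark: filter/blur nodes pass through.
result = run_effect_chain("raw", {"beautify", "watermark"})
```

In a real implementation each node would be a GPU pass (e.g. an OpenGL ES shader), but the ordering discipline is the same.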
  • the image processing thread still takes the OpenGL ES thread as an example.
  • the object that caches image data in OpenGL ES is the cache space (FrameBuffer), and the FrameBuffer includes FrameBuffer A and FrameBuffer B.
  • the cache space of the image processing thread includes a first cache space and a second cache space.
  • the first cache space may refer to one of FrameBuffer A and FrameBuffer B
  • the second cache space refers to one of FrameBuffer A and FrameBuffer B. of another.
  • step S1201 may include the following steps:
  • the current effect can refer to any one of multiple effects.
  • The current effect is the first image effect that needs to be applied to the original image data, or an image effect that is needed after the original image data has already been processed with at least one effect.
  • At the current node corresponding to the current effect, image processing is performed using the image processing algorithm of the current node to obtain the current image data.
  • the current effect can be the first image effect: beautification.
  • the original image data is processed by the current node to obtain the image data of the beautification effect.
  • the image data of the beautification effect is cached in the first cache space.
  • the first buffer space is FrameBuffer A.
  • next-level effect refers to: the next effect adjacent to the current effect among the multiple effects distributed according to the preset processing order.
  • the node corresponding to the next-level effect of the current effect is recorded as the next-level node.
  • the next-level node can obtain the current image data from the first cache space and perform next-level effect processing on the current image data.
  • the current effect can be beauty, and the next-level effect can be filters.
  • the original image data is processed by the current node to obtain the current image data of the beautification effect.
  • The current image data is processed with the next-level effect to obtain the image data of the filter effect. At this point, the image data obtained has both the beautification and filter effects superimposed.
  • The image data processed with the next-level effect is cached into the second cache space.
  • the image data with the current effect and the next-level effect superimposed on the original image data is cached in the second cache space.
  • the second buffer space is FrameBuffer B.
  • The processing method in this example can be applied to the image processing of any two adjacent effect nodes.
  • The image data obtained by adjacent nodes is cross-buffered between FrameBuffer A and FrameBuffer B, that is, a ping-pong processing method is used, until all effects have been processed and the target image data is obtained.
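The ping-pong scheme can be sketched as below. This is an illustrative model only (strings stand in for image data; a real implementation would bind and render into OpenGL ES framebuffer objects): each node reads from one buffer and writes to the other, and whichever buffer holds the last node's output becomes the target cache space.

```python
# Sketch of ping-pong buffering between two cache spaces, FrameBuffer A
# and FrameBuffer B. Adjacent effect nodes alternate read/write buffers.

def process_with_ping_pong(raw, effects):
    buffers = {"A": None, "B": None}
    src = raw
    write = "A"                                # first node writes to A
    for effect in effects:
        buffers[write] = f"{effect}({src})"    # run this node's algorithm
        src = buffers[write]                   # next node reads this output
        write = "B" if write == "A" else "A"   # swap buffers: the ping-pong
    target = "A" if write == "B" else "B"      # buffer of the final write
    return buffers[target], target

# Three effects: beautify -> A, filter -> B, makeup -> A (the target space).
data, target_buffer = process_with_ping_pong("raw", ["beautify", "filter", "makeup"])
```

With an odd number of effects the target cache space is FrameBuffer A, with an even number it is FrameBuffer B, which is why the text says the target cache space may be either the first or the second cache space.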
  • the target image data is image data after all effect processing, that is, all user-selected effects have been superimposed on the original image data.
  • the target cache space is the cache space where the target image data is located.
  • the target cache space is the first cache space or the second cache space.
  • the last effect among the multiple effects is makeup, and the image data processed by the last effect is the target image data.
  • the target image data is cached in the first cache space (at this time, the first cache space is FrameBuffer A), so the target cache space refers to the first cache space.
  • OpenGL ES can draw the target image data on the canvas object EGLSurface corresponding to each display screen.
  • For example, the display screens of the terminal device are display screen 1, display screen 2, and display screen 3.
  • The target image data is drawn on the canvas object EGLSurface1 corresponding to display screen 1, the canvas object EGLSurface2 corresponding to display screen 2, and the canvas object EGLSurface3 corresponding to display screen 3, respectively.
  • Display screen 1, display screen 2, and display screen 3 can then display preview pictures simultaneously.
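The draw step can be sketched as a loop over the one-to-one canvas-to-screen mapping. This is an illustrative Python stand-in (class names and the `draw` method are assumptions; real code would render to each EGLSurface and swap its buffers):

```python
# Sketch of step S130: one processed frame, drawn once per canvas object,
# with each canvas object mapped one-to-one to a display screen.

class Display:
    def __init__(self, name):
        self.name = name
        self.shown = None        # what this screen currently displays

class CanvasObject:              # stands in for an EGLSurface
    def __init__(self, display):
        self.display = display

    def draw(self, image):
        # Drawing on the canvas object makes its screen display the image.
        self.display.shown = image

displays = [Display(f"screen{i}") for i in (1, 2, 3)]
canvases = {d.name: CanvasObject(d) for d in displays}  # one-to-one mapping

target_image = "target-image-data"
for canvas in canvases.values():
    canvas.draw(target_image)   # same frame drawn on every canvas
```

The key point is that the expensive effect processing happened once, before this loop; the loop itself only presents the already-processed frame on each screen.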
  • Multiple effects are processed uniformly in the image processing thread, which helps reduce the power consumption of the terminal device, improve the efficiency of effect processing, alleviate heating, and provide users with a smoother preview experience.
  • the method of this embodiment may include steps S110 to S130 as shown in Figure 1 .
  • step S130 in this embodiment may include the following steps:
  • In step S1301, the target cache space is used to cache the target image data; please refer to the description in the corresponding embodiment of FIG. 2.
  • the image data processed by all effects can be obtained from the target cache space.
  • each canvas object is bound to a corresponding screen canvas.
  • the screen canvas is a canvas in the Android operating system framework of the terminal device and has a corresponding relationship with the display screen.
  • the terminal device can pre-set a corresponding screen canvas for each display screen.
  • the screen canvas manages a Surface (local window) internally, so that the display screen corresponding to this Surface can be displayed by drawing on the Surface.
  • the screen canvas can be, for example, SurfaceView or TextureView.
  • Each display screen corresponds to a SurfaceView or TextureView, and each SurfaceView or TextureView internally manages a Surface.
  • the canvas object such as EGLSurface has a binding relationship with SurfaceView or TextureView, that is, each display screen has a corresponding canvas object EGLSurface and a corresponding SurfaceView or TextureView.
  • In step S1303, after the canvas object EGLSurface corresponding to each display screen is determined, the target image data can be drawn on each canvas object EGLSurface, thereby realizing display on the corresponding display screen.
  • multiple displays can display preview images simultaneously.
  • the method of this embodiment may also include the following steps:
  • an initialization interface may be provided at the application layer, and the initialization interface is used to initialize objects required for the shooting preview method in the embodiment of the present disclosure.
  • the method of this embodiment may further include the following steps:
  • step S102 and step S103 in this embodiment are initialization operations of the terminal device, which are performed before step S110 is performed.
  • the order between step S101, step S102 and step S103 is for reference only and is not limiting.
  • For each display screen, a corresponding screen canvas, such as a SurfaceView or a TextureView, is set.
  • the SurfaceView/TextureView setting interface can be set at the application layer. By calling this setting interface, the corresponding screen canvas SurfaceView or screen canvas TextureView is set for each display screen. It is understandable that each time the setting interface is called to set the corresponding screen canvas for one display screen, the setting interface needs to be called multiple times for multiple display screens until the screen canvas corresponding to each display screen is set.
  • In step S103, after the screen canvas SurfaceView or TextureView corresponding to each display screen is set, the screen canvas corresponding to each display screen can be bound to the canvas object EGLSurface corresponding to that display screen, so that when the canvas object EGLSurface is drawn in OpenGL ES, the display screen corresponding to that canvas object can display the drawn content.
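The initialization sequence (steps S101 through S103) can be sketched as below. All names are illustrative stand-ins, assumed for the sketch: "gl-thread" for the OpenGL ES thread, "surface-texture" for the data receiving object, and string labels for the SurfaceView/TextureView screen canvases and EGLSurface canvas objects.

```python
# Sketch of initialization on shooting-app start:
#   S101: create the image processing thread and the data receiving object
#   S102: set a screen canvas for each display screen
#   S103: bind each screen canvas to that display's canvas object

def initialize_preview(display_names):
    state = {
        "image_thread": "gl-thread",         # S101: processing thread
        "data_receiver": "surface-texture",  # S101: data receiving object
        "bindings": {},                      # screen canvas -> canvas object
    }
    for name in display_names:
        screen_canvas = f"screen-canvas-{name}"  # S102: one per display
        canvas_object = f"canvas-object-{name}"  # one per display
        state["bindings"][screen_canvas] = canvas_object  # S103: bind
    return state

state = initialize_preview(["screen1", "screen2"])
```

As the text notes, the per-display setting interface is called once per screen, so the binding loop runs once for each display.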
  • the rendering framework involved in the shooting preview method of this disclosed embodiment mainly includes the three main objects described above: data receiving objects such as SurfaceTexture, image processing threads such as OpenGL ES threads, and canvas objects such as EGLSurface.
  • the screen canvas SurfaceView or the screen canvas TextureView is associated with the rendering framework in this embodiment.
  • the application layer or processor or the rendering framework may be used to perform steps S110 to S130 involved in the aforementioned embodiments.
  • When SurfaceTexture monitors that image data is available, a data callback event occurs and the preset interface (an internal interface of the rendering framework) is called, through which SurfaceTexture obtains the original image data.
  • The OpenGL ES thread performs unified effect processing on the original image data and draws the obtained target image data on the EGLSurfaces, achieving multi-display preview in the shooting scene. This not only improves the user experience, but also avoids multi-threaded effect processing operations, effectively saving the power consumption of the terminal device and alleviating heating.
  • the embodiment of the present disclosure also provides a shooting preview device, which is applied to a terminal device.
  • the device of this embodiment may include: an acquisition module 110, a calling module 120 and a drawing module 130.
  • the device of this embodiment is used to implement the method shown in Figures 1 to 3.
  • the acquisition module 110 is used to acquire the original image data collected by the camera.
  • the calling module 120 is used to call the image processing thread to perform effect processing on the original image data to obtain the target image data.
  • The drawing module 130 is used to draw the target image data on multiple canvas objects respectively, so that the multiple display screens respectively display preview pictures corresponding to the target image data; wherein there is a one-to-one correspondence between the multiple canvas objects and the multiple display screens.
  • Figure 6 shows a block diagram of a terminal device.
  • the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • Device 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communications component 616 .
  • Processing component 602 generally controls the overall operation of device 600, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 602 may include one or more processors 620 to execute instructions to complete all or part of the steps of the above method.
  • processing component 602 may include one or more modules that facilitate interaction between processing component 602 and other components.
  • processing component 602 may include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
  • Memory 604 is configured to store various types of data to support operations at device 600 . Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 604 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 606 provides power to various components of device 600.
  • Power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 600 .
  • Multimedia component 608 includes a screen that provides an output interface between device 600 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. A touch sensor can not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • multimedia component 608 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 610 is configured to output and/or input audio signals.
  • audio component 610 includes a microphone (MIC) configured to receive external audio signals when device 600 is in operating modes, such as call mode, recording mode, and speech recognition mode. The received audio signals may be further stored in memory 604 or sent via communications component 616 .
  • audio component 610 also includes a speaker for outputting audio signals.
  • the I/O interface 612 provides an interface between the processing component 602 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 614 includes one or more sensors that provide various aspects of status assessment for device 600 .
  • The sensor component 614 can detect the open/closed state of the device 600 and the relative positioning of components (for example, the display and keypad of the device 600); the sensor component 614 can also detect a position change of the device 600 or of a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and temperature changes of the device 600.
  • Sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 616 is configured to facilitate wired or wireless communications between device 600 and other devices.
  • Device 600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • communications component 616 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • Device 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
  • a non-transitory computer-readable storage medium provided in another exemplary embodiment of the present disclosure can be executed by the processor 620 of the device 600 to complete the above method.
  • The computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • The image processing thread performs unified effect processing on the original image data, and the target image data obtained by the effect processing is then displayed on multiple display screens. This reduces the power consumption of the terminal device, alleviates the problem of high power consumption caused by multi-threaded synchronous operations, and thereby reduces heating and lag of the terminal device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A shooting preview method and apparatus, a terminal device, and a storage medium. The method includes: obtaining original image data collected by a camera (S110); calling an image processing thread to perform effect processing on the original image data to obtain target image data (S120); and drawing the target image data on multiple canvas objects respectively, so that multiple display screens respectively display a preview picture represented by the target image data (S130), wherein there is a one-to-one correspondence between the multiple canvas objects and the multiple display screens. In the method, after the original image data is obtained in a shooting scene, an image processing thread performs unified effect processing on the original image data, and the target image data obtained by the effect processing is then displayed on the multiple display screens, thereby reducing the power consumption of the terminal device, alleviating the problem of high power consumption caused by multi-thread synchronous operations, and reducing heating and lag of the terminal device.

Description

一种拍摄预览方法、装置、终端设备及存储介质 技术领域
本公开涉及终端领域,尤其涉及一种拍摄预览方法、装置、终端设备及存储介质。
背景技术
随着技术发展,终端设备的形态趋于多样化,如折叠屏等多屏终端设备。多屏终端设备可以为用户提供更多样化的显示内容,比如在拍摄场景中,多屏终端设备的每个显示屏都可以同步显示拍摄预览画面,以提升用户体验。相关技术中,多屏显示拍摄预览画面的过程中,终端设备的功耗较大,发热现象较为严重。
发明内容
为克服相关技术中存在的问题,本公开提供一种拍摄预览方法、装置、终端设备及存储介质。
根据本公开实施例的第一方面,提出了一种拍摄预览方法,应用于终端设备,方法包括:
获取相机采集的原始图像数据;
调用图像处理线程对所述原始图像数据进行效果处理,获得目标图像数据;
将所述目标图像数据分别绘制于多个画布对象上,以使多个显示屏分别显示所述目标图像数据表征的预览画面;其中,所述多个画布对象与所述多个显示屏之间具有一一对应关系。
在一些实施方式中,所述获取相机采集的原始图像数据,包括:
响应于预设接口被调用,通过数据接收对象获取所述原始图像数据。
在一些实施方式中,所述调用图像处理线程对所述原始图像数据进行效果处理,获得目标图像数据,包括:
在所述图像处理线程上,按预设处理顺序对所述原始图像数据依次进行多种效果的图像处理,获得目标图像数据;
将所述目标图像数据缓存至目标缓存空间。
In some embodiments, the cache space of the image processing thread comprises a first cache space and a second cache space;
sequentially performing image processing of multiple effects on the raw image data in the preset processing order comprises:
according to the preset processing order, caching current image data that has undergone current effect processing in the first cache space;
acquiring the current image data from the first cache space, and performing next-level effect processing, following the current effect processing, on the current image data;
caching the image data that has undergone the next-level effect processing in the second cache space.
In some embodiments, the target cache space is the first cache space or the second cache space.
In some embodiments, drawing the target image data onto the plurality of canvas objects respectively comprises:
acquiring the target image data from a target cache space, wherein the target cache space is used to cache the target image data;
determining the canvas object corresponding to each of the display screens, wherein each canvas object is bound to a corresponding screen canvas;
drawing the target image data onto each of the canvas objects respectively.
In some embodiments, the method further comprises:
in response to a shooting application being started, creating the image processing thread and/or creating a data receiving object.
In some embodiments, when the shooting application is started, the method further comprises:
configuring, among the plurality of display screens, a corresponding screen canvas for each display screen;
binding the screen canvas corresponding to each display screen to the canvas object corresponding to that display screen.
According to a second aspect of the embodiments of the present disclosure, a shooting preview apparatus is provided, applied to a terminal device, the apparatus comprising:
an acquisition module configured to acquire raw image data captured by a camera;
an invocation module configured to invoke an image processing thread to perform effect processing on the raw image data to obtain target image data;
a drawing module configured to draw the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture corresponding to the target image data, wherein the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens.
According to a third aspect of the embodiments of the present disclosure, a terminal device is provided, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the shooting preview method described in any of the above.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a terminal device, the terminal device is enabled to perform the shooting preview method described in any of the above.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the method of the present disclosure, after the raw image data is acquired in a shooting scenario, an image processing thread performs unified effect processing on the raw image data, and the target image data obtained from the effect processing is then displayed on a plurality of display screens. This reduces the power consumption of the terminal device, mitigates the high power consumption caused by synchronized multi-thread operation, and thereby reduces heating and stuttering of the terminal device.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a flowchart of a method according to an exemplary embodiment.
Fig. 2 is a flowchart of a method according to an exemplary embodiment.
Fig. 3 is a flowchart of a method according to an exemplary embodiment.
Fig. 4 is a schematic diagram of the processing nodes of a method according to an exemplary embodiment.
Fig. 5 is a block diagram of an apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of a terminal device according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
Taking a device running the Android operating system as an example, a multi-screen terminal device in the related art generally uses GLSurfaceView (a UI control that Android provides to developers) to display the preview picture on multiple display screens.
To describe the technical problems in the related art, the related-art multi-screen preview approach is first set out here.
In the related-art approach, a GLSurfaceView serves as the screen canvas corresponding to a display screen, so N display screens require N GLSurfaceView instances. Moreover, each GLSurfaceView control internally contains an OpenGL ES (OpenGL for Embedded Systems) thread, and each GLSurfaceView control must independently perform effect processing (such as beautification) on the raw image data; that is, N display screens require N separate passes of effect processing. Only after every GLSurfaceView control has finished its effect processing do the multiple screens display the preview picture synchronously.
While the multiple GLSurfaceView controls perform effect processing separately, multiple OpenGL ES threads process tasks simultaneously, which greatly increases the power consumption of the terminal device and causes it to heat up severely. The more effects the user stacks, the greater the power consumption, the more severe the heating, and the lower the efficiency of the multi-screen display process; in severe cases the preview stutters, degrading the user experience.
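The cost difference described above can be illustrated with a simple counting model. This is a hypothetical, language-agnostic sketch (the real work happens on GPU-backed OpenGL ES threads, not in Python); it only counts how many effect-processing passes each architecture performs per frame, and both function names are illustrative.

```python
def effect_passes_related_art(num_screens: int, num_effects: int) -> int:
    # Related art: each GLSurfaceView owns its own OpenGL ES thread and
    # repeats the full effect chain independently for its screen.
    return num_screens * num_effects

def effect_passes_unified(num_screens: int, num_effects: int) -> int:
    # Disclosed method: one image processing thread runs the effect chain
    # once; the processed frame is then merely drawn to each screen's
    # canvas (drawing is far cheaper than effect processing, so it is
    # not counted here).
    return num_effects

# With 3 screens and 4 stacked effects (beautify, filter, bokeh, watermark):
assert effect_passes_related_art(3, 4) == 12
assert effect_passes_unified(3, 4) == 4
```

The model shows why stacking more effects widens the gap: the related-art cost grows with both the screen count and the effect count, while the unified thread's cost grows only with the effect count.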
To solve the problems in the related art, an embodiment of the present disclosure provides a shooting preview method applied to a terminal device, the method comprising: acquiring raw image data captured by a camera; invoking an image processing thread to perform effect processing on the raw image data to obtain target image data; and drawing the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture represented by the target image data, wherein the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens. In the method of the present disclosure, after the raw image data is acquired in a shooting scenario, an image processing thread performs unified effect processing on the raw image data, and the target image data obtained from the effect processing is then displayed on the plurality of display screens. This reduces the power consumption of the terminal device, mitigates the high power consumption caused by synchronized multi-thread operation, and thereby reduces heating and stuttering of the terminal device.
In an exemplary embodiment, the shooting preview method of this embodiment is applied to a terminal device. The terminal device may be, for example, an electronic device such as a mobile phone, a tablet computer, a laptop computer, a smart display, or a smart wearable device, the electronic device having at least two display screens.
As shown in Fig. 1, the method of this embodiment may include the following steps:
S110: acquire raw image data captured by a camera.
S120: invoke an image processing thread to perform effect processing on the raw image data to obtain target image data.
S130: draw the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture represented by the target image data.
Here, the terminal device of this embodiment is exemplified by one running the Android operating system. The method of this embodiment is applicable to shooting scenarios and may be executed by the application layer in the terminal device's system framework or by the terminal device's processor.
In step S110, in a shooting scenario, a shooting application of the terminal device is opened, and the camera hardware of the terminal device can capture image data within the viewfinder range in real time. In this step, the application layer can acquire the raw image data captured by the camera.
In one example, this step may include the following step:
S1101: in response to a preset interface being invoked, acquire the raw image data through a data receiving object. In this step, the data receiving object, such as a SurfaceTexture, may be created when the shooting application is started, i.e., as one of the shooting application's startup parameters. The preset interface may be a data invocation interface of the SurfaceTexture, such as a configured callback function. The SurfaceTexture can be used to monitor whether raw image data exists, i.e., to listen for data callback events. When the preset interface is invoked, raw image data exists, and the application layer can acquire the raw image data captured by the camera through the SurfaceTexture.
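The role of the data receiving object in step S1101 can be sketched as a minimal listener model. This is an illustrative Python analogue, not Android code: `FrameReceiver` stands in for a SurfaceTexture-like object, and `set_on_frame_available` stands in for registering its frame-available callback; all names are hypothetical.

```python
class FrameReceiver:
    """Stand-in for a SurfaceTexture-like data receiving object."""
    def __init__(self):
        self._callback = None

    def set_on_frame_available(self, callback):
        # Analogous to registering a frame-available callback
        # (the "preset interface") on the receiving object.
        self._callback = callback

    def push_frame(self, raw_frame):
        # Simulates the camera delivering a frame: the registered
        # callback fires, and the application layer pulls the data.
        if self._callback is not None:
            self._callback(raw_frame)

received = []
receiver = FrameReceiver()
receiver.set_on_frame_available(lambda frame: received.append(frame))
receiver.push_frame("raw-frame-0")   # camera produces a frame
assert received == ["raw-frame-0"]
```

The point of the model is that the application layer never polls: it acquires raw image data only when the callback (the preset interface) fires.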
In step S120, the image processing thread may be a thread on which an image-processing application programming interface (API) set has been initialized, for example a thread initialized with the OpenGL ES (OpenGL for Embedded Systems) API set, a thread initialized with the Vulkan (a cross-platform 2D and 3D graphics API) API set, or a thread initialized with the RenderScript API set.
This step is described with the image processing thread being an OpenGL ES thread as an example: the thread is created when the shooting application is started, and the OpenGL ES APIs are initialized on the thread. The OpenGL ES thread can then use the graphics processing unit (GPU) to perform effect processing on the raw image data, e.g., running various image algorithms to achieve image effects such as beautification and filters. In accordance with the effect types selected by the user, the target image data is obtained after all effect processing has been performed on the raw image data in the OpenGL ES thread. The target image data represents the preview picture to be displayed.
In step S130, the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens, i.e., each canvas object corresponds to one display screen. A canvas object is an interface layer between the image processing thread and the local window system; it serves as a bridge between the image processing thread and a display screen, providing a drawing surface for the image processing thread so that the content drawn by the image processing thread can be displayed on the corresponding display screen.
The canvas object may be an EGLSurface (Embedded Graphics Library Surface) or a VkSurfaceKHR, an abstract surface type of the Vulkan extensions. This step is described with the canvas object being an EGLSurface as an example. An EGLSurface is an intermediate interface layer between the OpenGL ES APIs and the local window system; it serves as a bridge between OpenGL ES and a display screen, providing OpenGL ES with a drawing surface so that the content drawn by OpenGL ES is displayed on the corresponding display screen.
In this step, the target image data obtained after unified effect processing by the OpenGL ES thread is drawn onto each canvas object respectively, so that the multiple display screens can display the preview picture simultaneously, which both improves the user experience and reduces the terminal device's power consumption during multi-screen display.
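Steps S110 to S130 can be condensed into a short process-once, draw-many model: the frame passes through the effect chain a single time, and the one result is drawn onto every canvas object in the one-to-one display map. This is a hedged Python sketch; the function name, the string-based "frames", and the dict of canvas objects are all illustrative stand-ins for GPU textures and EGLSurfaces.

```python
def process_and_preview(raw_frame, effects, canvas_by_screen):
    # S120: unified effect processing on a single image processing thread.
    target = raw_frame
    for effect in effects:
        target = effect(target)
    # S130: draw the same target image data onto every canvas object;
    # each canvas object corresponds to exactly one display screen.
    return {canvas: target for canvas in canvas_by_screen.values()}

canvases = {"screen1": "EGLSurface1", "screen2": "EGLSurface2"}
effects = [lambda f: f + "+beautify", lambda f: f + "+filter"]
shown = process_and_preview("raw", effects, canvases)
assert shown == {"EGLSurface1": "raw+beautify+filter",
                 "EGLSurface2": "raw+beautify+filter"}
```

Note that every screen receives the identical target image data: the effect chain runs once regardless of how many screens are attached.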
In an exemplary embodiment, the method of this embodiment may include steps S110 to S130 shown in Fig. 1. As shown in Fig. 2, step S120 in this embodiment may include the following steps:
S1201: on the image processing thread, sequentially perform image processing of multiple effects on the raw image data in a preset processing order to obtain the target image data.
S1202: cache the target image data in a target cache space.
In step S1201, the multiple effects include, for example, beautification, filters, bokeh, watermarks, and other types of image effects; applying each kind of effect requires running the corresponding image processing algorithm. The preset processing order may be a processing order preconfigured by the terminal device. The multiple image processing algorithms (such as beautification, filter, bokeh, and watermark) are chained in the preset processing order, and each image processing algorithm can be regarded as a node. At each node, one image processing algorithm (such as beautification) is executed to achieve the image effect that the node implements. After all image effects selected by the user have been processed, the target image data is obtained. It will be appreciated that, depending on the user's selection, only some effect types may be used in one shooting session, but the data still flows through the nodes sequentially in the preset processing order.
In this step, the image processing thread is still exemplified by an OpenGL ES thread. In OpenGL ES, the object that caches image data is the framebuffer (FrameBuffer); the framebuffers here include FrameBuffer A and FrameBuffer B.
In one example, the cache space of the image processing thread comprises a first cache space and a second cache space; the first cache space may be one of FrameBuffer A and FrameBuffer B, and the second cache space is the other of the two.
In this example, step S1201 may include the following steps:
S1201-1: according to the preset processing order, cache the current image data that has undergone the current effect processing in the first cache space. In this step, the current effect may denote any one of the multiple effects: for example, the current effect is the first image effect to be applied to the raw image data, or an image effect to be applied after the raw image data has undergone at least one effect processing. At the current node corresponding to the current effect, the image processing algorithm of the current node is used to perform image processing to obtain the current image data. For example, with reference to Fig. 4, the current effect may be the first image effect, beautification: after processing at the current node, the raw image data yields beautified image data, which is cached in the first cache space; in the example of Fig. 4, the first cache space is FrameBuffer A.
S1201-2: acquire the current image data from the first cache space, and perform on it the next-level effect processing that follows the current effect processing. In this step, the next-level effect denotes the effect adjacent to the current effect among the multiple effects arranged in the preset processing order. The node corresponding to the next-level effect is denoted the next-level node; the next-level node can acquire the current image data from the first cache space and perform the next-level effect processing on it. For example, with reference to Fig. 4, the current effect may be beautification and the next-level effect a filter: the raw image data is processed at the current node to obtain beautified current image data, and after the next-level effect processing it becomes filtered image data, at which point the obtained image data has both the beautification and filter effects applied.
S1201-3: cache the image data that has undergone the next-level effect processing in the second cache space. In this step, the image data on which both the current effect and the next-level effect have been applied on top of the raw image data is cached in the second cache space. In the example of Fig. 4, the second cache space is FrameBuffer B.
The processing approach of this example applies to the image processing of any two adjacent effect nodes: the image data produced at adjacent nodes is cached alternately between FrameBuffer A and FrameBuffer B, i.e., a ping-pong scheme is adopted, until all effects have been processed and the target image data is obtained.
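The ping-pong caching between FrameBuffer A and FrameBuffer B can be sketched as follows. This is an illustrative model of the buffer alternation only; a real implementation would bind OpenGL ES framebuffer objects rather than entries in a Python dict, and the function name is hypothetical.

```python
def run_effect_chain(raw_frame, effect_nodes):
    buffers = {"A": None, "B": None}   # FrameBuffer A / FrameBuffer B
    write, trace = "A", []
    data = raw_frame
    for effect in effect_nodes:
        data = effect(data)           # process at this node
        buffers[write] = data         # cache the result in the write buffer
        trace.append(write)
        write = "B" if write == "A" else "A"  # ping-pong: swap buffers
    # The target cache space is whichever buffer holds the last result.
    target_space = trace[-1]
    return buffers[target_space], target_space, trace

out, space, trace = run_effect_chain(
    "raw",
    [lambda f: f + "+beautify", lambda f: f + "+filter", lambda f: f + "+makeup"],
)
assert out == "raw+beautify+filter+makeup"
assert trace == ["A", "B", "A"]   # an odd-length chain ends in FrameBuffer A
assert space == "A"
```

This matches the Fig. 4 narrative: with beautification, filter, and makeup chained in order, the final (target) image data lands in FrameBuffer A, which therefore serves as the target cache space.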
In step S1202, the target image data is the image data that has undergone all effect processing, i.e., all user-selected effects have been applied on top of the raw image data. The target cache space is the cache space where the target image data resides.
In one example, the target cache space is the first cache space or the second cache space. With reference to the example of Fig. 4, the last of the multiple effects is makeup, and the image data after this last effect processing is the target image data. In the example of Fig. 4, the target image data is cached in the first cache space (at this point the first cache space is FrameBuffer A), so the target cache space is the first cache space.
After the target image data is acquired from the first cache space, in step S130 of this example OpenGL ES can draw the target image data onto the canvas object EGLSurface corresponding to each display screen. In the example of Fig. 4, the terminal device's display screens are display screen 1, display screen 2, and display screen 3, and the target image data is drawn onto canvas object EGLSurface1 corresponding to display screen 1, canvas object EGLSurface2 corresponding to display screen 2, and canvas object EGLSurface3 corresponding to display screen 3, so that the three display screens display the preview picture synchronously.
In the embodiments of the present disclosure, performing the multiple effect processing in one unified image processing thread helps reduce the power consumption of the terminal device, improves the efficiency of the effect processing, helps mitigate heating, and provides the user with a smoother, stutter-free visual experience.
In an exemplary embodiment, the method of this embodiment may include steps S110 to S130 shown in Fig. 1. As shown in Fig. 3, step S130 in this embodiment may include the following steps:
S1301: acquire the target image data from a target cache space.
S1302: determine the canvas object corresponding to each display screen.
S1303: draw the target image data onto each canvas object respectively.
In step S1301, the target cache space is used to cache the target image data; see the description of the embodiment corresponding to Fig. 2. The image data that has undergone all effect processing can be acquired from the target cache space.
In step S1302, each canvas object is bound to a corresponding screen canvas. A screen canvas is a canvas in the Android operating system framework of the terminal device and corresponds to a display screen. The terminal device can preconfigure a corresponding screen canvas for each display screen; the screen canvas internally manages a Surface (local window), so that drawing on the Surface produces the display on the screen corresponding to that Surface.
In this step, the screen canvas may be, for example, a SurfaceView or a TextureView; each display screen corresponds to one SurfaceView or TextureView, and each SurfaceView or TextureView internally manages a Surface.
In this step, the canvas object, e.g., an EGLSurface, has a binding relationship with the SurfaceView or TextureView; that is, each display screen has a corresponding canvas object EGLSurface as well as a corresponding SurfaceView or TextureView.
In step S1303, after the canvas object EGLSurface corresponding to each display screen is determined, the target image data can be drawn onto that canvas object EGLSurface, thereby producing the display on the corresponding screen. Once all canvas objects have been drawn, the multiple display screens display the preview picture synchronously.
In an exemplary embodiment, when the shooting application is started, the method of this embodiment may further include the following step:
S101: in response to the shooting application being started, create the image processing thread and/or create a data receiving object.
When the shooting application is started and before step S110 is executed, the terminal device performs initialization. An initialization interface may be provided at the application layer and is used to initialize the objects required by the shooting preview method of the embodiments of the present disclosure. In this step, the initialization interface may be invoked to create the thread and initialize the OpenGL ES APIs on it, and to create a data receiving object such as a SurfaceTexture. After the data receiving object such as the SurfaceTexture is created, the SurfaceTexture may be used as one of the startup parameters of the shooting application, so that raw image data can be monitored at any time through the SurfaceTexture.
In an exemplary embodiment, when the shooting application is started, the method of this embodiment may further include the following steps:
S102: among the plurality of display screens, configure a corresponding screen canvas for each display screen.
S103: bind the screen canvas corresponding to each display screen to the canvas object corresponding to that display screen.
Steps S102 and S103 in this embodiment are initialization operations of the terminal device and are executed before step S110. The order among steps S101, S102, and S103 is for reference only and is not limiting.
In step S102, a corresponding screen canvas, such as a SurfaceView or TextureView, is configured for each display screen of the terminal device. A SurfaceView/TextureView setting interface may be provided at the application layer, and this setting interface is invoked to configure the corresponding SurfaceView or TextureView screen canvas for each display screen. It will be appreciated that each invocation of the setting interface configures the screen canvas for one display screen; for multiple display screens, the setting interface must be invoked multiple times until the screen canvas of every display screen has been configured.
In step S103, after the SurfaceView or TextureView screen canvas corresponding to each display screen has been configured, the screen canvas corresponding to each display screen can be bound to the canvas object EGLSurface corresponding to that display screen, so that when the canvas object EGLSurface is drawn in OpenGL ES, the display screen corresponding to that canvas object displays the drawn content.
The rendering framework involved in the shooting preview method of the embodiments of the present disclosure mainly comprises the three main objects described above: a data receiving object such as a SurfaceTexture, an image processing thread such as an OpenGL ES thread, and canvas objects such as EGLSurfaces. During initialization, the SurfaceView or TextureView screen canvases are associated with the rendering framework of this embodiment.
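The initialization steps S101 to S103 can be summarized as: create the processing thread and the receiving object, then configure one screen canvas per display and bind it to that display's canvas object. A hedged Python sketch follows; the string placeholders stand in for real SurfaceTexture, OpenGL ES thread, SurfaceView, and EGLSurface objects, and every name is illustrative.

```python
def initialize_render_framework(screens):
    framework = {
        "receiver": "SurfaceTexture",      # S101: data receiving object
        "gl_thread": "OpenGL-ES-thread",   # S101: image processing thread
        "bindings": {},
    }
    for screen in screens:
        screen_canvas = f"SurfaceView[{screen}]"   # S102: one screen canvas per display
        canvas_object = f"EGLSurface[{screen}]"    # the canvas object for this display
        # S103: bind screen canvas <-> canvas object, one pair per screen,
        # so drawing on the canvas object lights up the corresponding display.
        framework["bindings"][screen] = (screen_canvas, canvas_object)
    return framework

fw = initialize_render_framework(["screen1", "screen2", "screen3"])
assert len(fw["bindings"]) == 3
assert fw["bindings"]["screen2"] == ("SurfaceView[screen2]", "EGLSurface[screen2]")
```

As the loop reflects, the setting interface is invoked once per display, which is consistent with the description of step S102 above.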
After initialization, steps S110 to S130 of the foregoing embodiments can be executed by the application layer, the processor, or the rendering framework described here. The SurfaceTexture monitors the image data; the preset interface is an internal interface of the rendering framework, and a data callback event occurs when raw image data is passed to the preset interface. When a data callback occurs, the SurfaceTexture can acquire the raw image data. The OpenGL ES thread performs unified effect processing on the raw image data and draws the obtained target image data onto the EGLSurfaces, achieving multi-display preview in the shooting scenario. This both improves the user experience and avoids multi-threaded effect processing, effectively saving the terminal device's power and mitigating heating.
In an exemplary embodiment, an embodiment of the present disclosure further provides a shooting preview apparatus applied to a terminal device. As shown in Fig. 5, the apparatus of this embodiment may include an acquisition module 110, an invocation module 120, and a drawing module 130, and is used to implement the methods shown in Figs. 1 to 3. The acquisition module 110 is configured to acquire raw image data captured by a camera. The invocation module 120 is configured to invoke an image processing thread to perform effect processing on the raw image data to obtain target image data. The drawing module 130 is configured to draw the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture corresponding to the target image data, wherein the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens.
Fig. 6 is a block diagram of a terminal device. The present disclosure further provides a terminal device; for example, the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
The device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operation of the device 600, such as operations associated with display, phone calls, data communication, camera operation, and recording. The processing component 602 may include one or more processors 620 to execute instructions to complete all or some of the steps of the above methods. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components; for example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation on the device 600. Examples of such data include instructions for any application or method operating on the device 600, contact data, phonebook data, messages, pictures, videos, and the like. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power component 606 supplies power to the various components of the device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC), which is configured to receive external audio signals when the device 600 is in an operating mode such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing the device 600 with status assessments of various aspects. For example, the sensor component 614 may detect the on/off state of the device 600 and the relative positioning of components, e.g., the display and keypad of the device 600; the sensor component 614 may also detect a change in position of the device 600 or one of its components, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a temperature change of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the above methods.
In another exemplary embodiment of the present disclosure, a non-transitory computer-readable storage medium is provided, such as the memory 604 including instructions, which are executable by the processor 620 of the device 600 to complete the above methods. For example, the computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. When the instructions in the storage medium are executed by a processor of a terminal device, the terminal device is enabled to perform the above methods.
Other embodiments of the present invention will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
Industrial Applicability
In the method of the present disclosure, after the raw image data is acquired in a shooting scenario, an image processing thread performs unified effect processing on the raw image data, and the target image data obtained from the effect processing is then displayed on a plurality of display screens. This reduces the power consumption of the terminal device, mitigates the high power consumption caused by synchronized multi-thread operation, and thereby reduces heating and stuttering of the terminal device.

Claims (11)

  1. A shooting preview method, characterized in that the method is applied to a terminal device and comprises:
    acquiring raw image data captured by a camera;
    invoking an image processing thread to perform effect processing on the raw image data to obtain target image data;
    drawing the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture represented by the target image data, wherein the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens.
  2. The shooting preview method according to claim 1, wherein acquiring the raw image data captured by the camera comprises:
    in response to a preset interface being invoked, acquiring the raw image data through a data receiving object.
  3. The shooting preview method according to claim 1, wherein invoking the image processing thread to perform effect processing on the raw image data to obtain the target image data comprises:
    on the image processing thread, sequentially performing image processing of multiple effects on the raw image data in a preset processing order to obtain the target image data;
    caching the target image data in a target cache space.
  4. The shooting preview method according to claim 3, wherein the cache space of the image processing thread comprises a first cache space and a second cache space;
    sequentially performing image processing of multiple effects on the raw image data in the preset processing order comprises:
    according to the preset processing order, caching current image data that has undergone current effect processing in the first cache space;
    acquiring the current image data from the first cache space, and performing next-level effect processing, following the current effect processing, on the current image data;
    caching the image data that has undergone the next-level effect processing in the second cache space.
  5. The shooting preview method according to claim 4, wherein the target cache space is the first cache space or the second cache space.
  6. The shooting preview method according to claim 1, wherein drawing the target image data onto the plurality of canvas objects respectively comprises:
    acquiring the target image data from a target cache space, wherein the target cache space is used to cache the target image data;
    determining the canvas object corresponding to each of the display screens, wherein each canvas object is bound to a corresponding screen canvas;
    drawing the target image data onto each of the canvas objects respectively.
  7. The shooting preview method according to any one of claims 1 to 6, wherein the method further comprises:
    in response to a shooting application being started, creating the image processing thread and/or creating a data receiving object.
  8. The shooting preview method according to any one of claims 1 to 6, wherein, when a shooting application is started, the method further comprises:
    configuring, among the plurality of display screens, a corresponding screen canvas for each display screen;
    binding the screen canvas corresponding to each display screen to the canvas object corresponding to that display screen.
  9. A shooting preview apparatus, characterized in that the apparatus is applied to a terminal device and comprises:
    an acquisition module configured to acquire raw image data captured by a camera;
    an invocation module configured to invoke an image processing thread to perform effect processing on the raw image data to obtain target image data;
    a drawing module configured to draw the target image data onto a plurality of canvas objects respectively, so that a plurality of display screens each display a preview picture corresponding to the target image data, wherein the plurality of canvas objects are in one-to-one correspondence with the plurality of display screens.
  10. A terminal device, characterized by comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to perform the shooting preview method according to any one of claims 1 to 8.
  11. A non-transitory computer-readable storage medium, characterized in that, when instructions in the storage medium are executed by a processor of a terminal device, the terminal device is enabled to perform the shooting preview method according to any one of claims 1 to 8.
PCT/CN2022/095274 2022-05-26 2022-05-26 Shooting preview method and apparatus, terminal device, and storage medium WO2023225949A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/095274 WO2023225949A1 (zh) 2022-05-26 2022-05-26 Shooting preview method and apparatus, terminal device, and storage medium
CN202280004626.2A CN117616747A (zh) 2022-05-26 2022-05-26 Shooting preview method and apparatus, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/095274 WO2023225949A1 (zh) 2022-05-26 2022-05-26 Shooting preview method and apparatus, terminal device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023225949A1 true WO2023225949A1 (zh) 2023-11-30

Family

ID=88918144

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095274 WO2023225949A1 (zh) 2022-05-26 2022-05-26 一种拍摄预览方法、装置、终端设备及存储介质

Country Status (2)

Country Link
CN (1) CN117616747A (zh)
WO (1) WO2023225949A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008211843A (ja) * 2008-05-19 2008-09-11 Casio Comput Co Ltd Imaging apparatus and imaging control program
CN109862258A (zh) * 2018-12-27 2019-06-07 维沃移动通信有限公司 Image display method and terminal device
CN110264566A (zh) * 2019-06-03 2019-09-20 杭州小伊智能科技有限公司 Multi-screen display apparatus and method based on face AI makeup replacement
CN111182196A (zh) * 2018-11-13 2020-05-19 奇酷互联网络科技(深圳)有限公司 Photographing preview method, smart terminal, and apparatus with storage function
CN111343316A (zh) * 2020-05-25 2020-06-26 北京小米移动软件有限公司 Foldable-screen-based terminal
US20220019345A1 (en) * 2019-04-01 2022-01-20 Vivo Mobile Communication Co., Ltd. Image editing method and terminal


Also Published As

Publication number Publication date
CN117616747A (zh) 2024-02-27


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280004626.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22943149

Country of ref document: EP

Kind code of ref document: A1