CN114449171B - Method for controlling camera, terminal device, storage medium and program product


Info

Publication number
CN114449171B
CN114449171B
Authority
CN
China
Legal status
Active
Application number
CN202210108459.8A
Other languages
Chinese (zh)
Other versions
CN114449171A
Inventor
冯坤
聂和平
周春萌
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN202210108459.8A
Publication of CN114449171A
Application granted
Publication of CN114449171B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/66 - Remote control of cameras or camera parts, e.g. by remote control devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Abstract

The application discloses a method for controlling a camera, a terminal device, a storage medium and a program product, which are used to solve the problem that camera functions and applications cannot be adapted on devices with only a single front-facing camera. First, in response to a camera start instruction, the number of physical cameras and the camera parameter information of the terminal device are acquired. If there is exactly one physical camera and it is a front camera, a dual-camera object is created, comprising the front camera and a logical camera; the logical camera serves as the rear camera, and its parameters are the same as those of the front camera. The camera parameters of the front camera and of the rear camera are then configured into a camera call interface, which supports upper-layer applications in calling either the front camera or the rear camera. In summary, a virtual logical camera is established to stand in for the rear camera, so that the camera can be used normally and the user experience is improved.

Description

Method for controlling camera, terminal device, storage medium and program product
Technical Field
The application belongs to the technical field of intelligent terminals, and particularly relates to a method for controlling a camera, terminal equipment, a storage medium and a program product.
Background
As the photographing functions of smartphones are used more and more, users increasingly value a good photographing experience.
In the prior art, mainstream mobile phone models have two or more cameras, including a front camera and a rear camera, and downloadable applications are designed for such models. When these applications are installed on a model that has only one front camera and no rear camera, the camera functions often cannot be used normally. For example, when scanning a two-dimensional code or performing face recognition, the camera may fail to start or the image may be displayed at a deflected angle.
Therefore, how to adapt camera functions and applications on devices with a single front-facing camera is a problem that needs to be addressed.
Disclosure of Invention
The application aims to provide a method for controlling a camera, a terminal device, a storage medium and a program product, which are used to solve the problem that camera functions and applications cannot be adapted on devices with a single front-facing camera.
In a first aspect, the present application provides a method of controlling a camera, the method comprising:
in response to a camera start instruction, acquiring the number of physical cameras and the camera parameter information of the terminal device;
if the number of physical cameras is one and that physical camera is a front camera, creating a dual-camera object, wherein the dual-camera object comprises the front camera and a logical camera, the logical camera serves as the rear camera, and the camera parameters of the rear camera are the same as those of the front camera;
and configuring the camera parameters of the front camera and the camera parameters of the rear camera into a camera call interface, wherein the camera call interface is used to support an upper-layer application in calling the front camera or the rear camera.
In some embodiments, creating the dual-camera object specifically includes:
configuring the identifier of the front camera as a first identifier, configuring the orientation attribute of the front camera as a first attribute, and recording the camera parameters of the front camera; and
configuring the identifier of the rear camera as a second identifier, configuring the orientation attribute of the rear camera as a second attribute, and recording the camera parameters of the front camera as the camera parameters of the rear camera.
In some embodiments, the method further comprises:
in response to a call to the front camera or a call to the rear camera, calling the front camera to acquire an image;
if the front camera was called to acquire the image, performing a first image conversion on the acquired image based on the imaging characteristics of the front camera before outputting and displaying it;
and if the rear camera was called to acquire the image, performing a second image conversion on the acquired image based on the imaging characteristics of the rear camera before outputting and displaying it.
In some embodiments, performing the first image conversion on the acquired image based on the imaging characteristics of the front camera and then outputting and displaying it specifically includes:
acquiring the rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device lies;
and controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and displaying it with the upper left corner of the image at the upper left corner of the display screen.
In some embodiments, the controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle specifically includes:
if the terminal equipment rotates clockwise by a first angle, the acquired image is displayed by rotating clockwise by a second angle; the first angle is within a first specified range based on 0 degrees, and the second angle is within a second specified range based on 90 degrees;
if the terminal equipment rotates clockwise by a third angle, the acquired image is rotated clockwise by a fourth angle to be displayed; the third angle is in a third specified range based on 90 degrees, and the fourth angle is in a fourth specified range based on 180 degrees;
if the terminal equipment rotates clockwise by a fifth angle, the acquired image is displayed by rotating clockwise by a sixth angle; the fifth angle is within a fifth specified range based on 180 degrees, and the sixth angle is within a sixth specified range based on 270 degrees;
if the terminal equipment rotates clockwise by a seventh angle, the acquired image is displayed by rotating clockwise by an eighth angle; the seventh angle is within a seventh specified range based on 270 degrees, and the eighth angle is within an eighth specified range based on 360 degrees.
In some embodiments, performing the second image conversion on the acquired image based on the imaging characteristics of the rear camera and then outputting and displaying it specifically includes:
acquiring the rotation angle of the terminal equipment; the rotation angle is the rotation angle of the plane where the display screen of the terminal equipment is located;
and controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and flipping the rotated image left and right for display.
In some embodiments, the specified clockwise angle is determined using the following formula:
y=(a+x)%b
wherein y represents the specified clockwise angle, x represents the rotation angle, and a and b are constants.
In a second aspect, the present application further provides a terminal device, comprising:
a display for displaying the acquired image;
a memory for storing executable instructions of the processor;
a processor for executing the executable instructions to implement any of the methods as provided in the first aspect of the application.
In a third aspect, an embodiment of the application also provides a computer-readable storage medium storing instructions which, when executed by a processor of a terminal device, enable the terminal device to perform any of the methods provided in the first aspect of the application.
In a fourth aspect, an embodiment of the application provides a computer program product comprising a computer program which, when executed by a processor, implements any of the methods as provided in the first aspect of the application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the application establishes a virtual logical camera to stand in for the rear camera, and performs corresponding adaptation according to the different upper-layer applications, thereby making the camera functions of upper-layer applications compatible with single-camera devices so that the photographing and scanning functions can be used normally. In summary, the scheme provided by the application solves the software adaptation problem of devices with a single front-facing camera and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed. On the basis of conforming to the common knowledge in the field, the above preferred conditions can be arbitrarily combined to obtain the preferred embodiments of the present application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a schematic diagram of a software architecture of a terminal according to an embodiment of the present application;
fig. 3 is an application interface schematic diagram of a method for controlling a camera according to an embodiment of the present application;
fig. 4 is a flowchart of a method for controlling a camera according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an application calling front camera to adapt a collected image to a display screen according to an embodiment of the present application;
fig. 6 is a schematic view of angle conversion of a terminal device, an acquired image, a display image and a display effect according to an embodiment of the present application;
fig. 7 is a schematic flow chart of an image collected based on rotation angle control displayed on a display screen according to an embodiment of the present application;
fig. 8 is a schematic flow chart of an application calling a rear camera to adapt a collected image to a display screen according to an embodiment of the present application;
fig. 9 is a schematic view of another angle conversion of the terminal device, the collected image, the display image and the display effect according to the embodiment of the present application;
fig. 10 is a schematic diagram of the overall architecture of a method for controlling a camera according to an embodiment of the present application;
fig. 11 is an overall flowchart of a method for controlling a camera according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Wherein the described embodiments are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Also, in the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. Furthermore, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", or the like, may include one or more such features, either explicitly or implicitly.
First, fig. 1 shows a schematic configuration of a terminal 100.
The embodiment will be specifically described below with reference to the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 1 is only one example, and that the terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A hardware configuration block diagram of the terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the terminal 100 includes: radio Frequency (RF) circuitry 110, memory 120, display unit 130, camera 140, sensor 150, audio circuitry 160, wireless fidelity (Wireless Fidelity, wi-Fi) module 170, processor 180, bluetooth module 181, and power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during the process of receiving and transmitting information or communication, and may receive downlink data of the base station and then transmit the downlink data to the processor 180 for processing; uplink data may be sent to the base station. Typically, RF circuitry includes, but is not limited to, antennas, at least one amplifier, transceivers, couplers, low noise amplifiers, diplexers, and the like.
Memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal 100 and data processing by running software programs or data stored in the memory 120. Memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate. The memory 120 of the present application may store an operating system and various application programs, and may also store program codes for performing the methods of the embodiments of the present application.
The display unit 130 may be used to receive input digital or character information, generate signal inputs related to user settings and function control of the terminal 100, and in particular, the display unit 130 may include a touch screen 131 provided on the front surface of the terminal 100, and collect touch operations thereon or thereabout by a user, such as starting a camera, closing the camera, clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display information input by a user or information provided to the user and a graphical user interface (graphical user interface, GUI) of various menus of the terminal 100. In particular, the display unit 130 may include a display 132 disposed on the front of the terminal 100. The display 132 may be configured in the form of a liquid crystal display, light emitting diodes, or the like. The display unit 130 may be used to display an interface for enabling the user to start the camera to perform photographing scanning in the present application.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement input and output functions of the terminal 100, and after integration, the touch screen may be simply referred to as a touch display screen. The display unit 130 may display the application program and the corresponding operation steps in the present application.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the processor 180 for conversion into a digital image signal.
The terminal 100 may further include at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal 100 may also be configured with other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, light sensors, motion sensors, and the like.
Audio circuitry 160, speaker 161, microphone 162 can provide an audio interface between the user and terminal 100. The audio circuit 160 may transmit the received electrical signal converted from audio data to the speaker 161, and the speaker 161 converts the electrical signal into a sound signal and outputs the sound signal. The terminal 100 may also be configured with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data, which is output to the RF circuit 110 for transmission to, for example, another terminal, or to the memory 120 for further processing. The microphone 162 of the present application may acquire the voice of the user.
Wi-Fi belongs to a short-range wireless transmission technology, and the terminal 100 can help a user to send and receive e-mail, browse web pages, access streaming media and the like through the Wi-Fi module 170, so that wireless broadband internet access is provided for the user.
The processor 180 is a control center of the terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, the processor 180 may include one or more processing units; the processor 180 may also integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., and a baseband processor that primarily handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. The processor 180 of the present application may run an operating system, application programs, user interface displays and touch responses, as well as methods described in embodiments of the present application. In addition, the processor 180 is coupled with the display unit 130.
The bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal 100 may establish a bluetooth connection with a wearable terminal device (e.g., a smart watch) also provided with a bluetooth module through the bluetooth module 181, thereby performing data interaction.
The terminal 100 also includes a power supply 190 (e.g., a battery) that provides power to the various components. The power supply may be logically connected to the processor 180 through a power management system, so that functions of managing charge, discharge, power consumption, etc. are implemented through the power management system. The terminal 100 may also be configured with power buttons for powering on and off the terminal, and for locking the screen, etc.
Fig. 2 is a software configuration block diagram of the terminal 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may be divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may be divided into a java side including a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, an application manager, and the like, and a native side.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, short messages, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including an interface for controlling a single front camera and a short message notification icon may include a view for displaying text and a view for displaying a picture.
The telephony manager is used to provide the communication functions of the terminal 100, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information (e.g., a message digest or message content of a short message) in the status bar; such notifications disappear automatically after a short stay and require no user interaction. For example, the notification manager is used to announce that a download is complete, to give message reminders, and so on. The notification manager may also present notifications in the form of a chart or scrolling text in the system top status bar, such as notifications from applications running in the background, or notifications in the form of a dialog window on the screen. For example, text may be prompted in the status bar, an alert sound emitted, the terminal vibrated, or an indicator light blinked.
The native side services are located on the native side of the application framework layer, adjacent to the system library.
The Android runtime includes a core library and a virtual machine, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), and camera services, among others.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The camera service manages the logical camera objects and is configured with their corresponding parameter information.
The kernel layer is the layer between hardware and software. It comprises at least a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the terminal 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the time stamp of the touch operation) and stores it at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a click on the control of the camera application icon as an example: the camera application calls an interface of the application framework layer to start, in turn starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 140.
The terminal 100 in the embodiment of the application can be a terminal device with a single front-facing camera, such as a mobile phone, a tablet computer, a wearable device, a notebook computer, a television and the like. The method for controlling the camera provided by the application is described below with reference to the embodiment.
The inventive concept of the present application can be summarized as follows: first, in response to a camera start instruction, the number of physical cameras and the camera parameter information of the terminal device are acquired; if there is exactly one physical camera and it is a front camera, a dual-camera object is created, comprising the front camera and a logical camera, with the logical camera serving as the rear camera and having the same parameters as the front camera; then, the camera parameters of the front camera and of the rear camera are configured into a camera call interface to support upper-layer applications in calling the front camera or the rear camera. In summary, the embodiment of the application can, under current technical conditions, establish a virtual logical camera to stand in for the rear camera while adapting to the differences among upper-layer applications, making upper-layer applications compatible with single-camera devices so that the photographing and scanning functions can be used normally and the user experience is improved.
After the main inventive concept of the embodiments of the present application is introduced, some simple descriptions are made below on application scenarios applicable to the technical solution of the embodiments of the present application, and it should be noted that the application scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In the specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Referring to fig. 3, an interface schematic diagram of a user opening the camera to photograph or scan according to an embodiment of the present application is shown. The interface is the opened camera interface, including photographing, video, portrait and other functions. The device contains only a front camera; if an upper-layer application calls the rear camera, the logical camera is called in its place, and appropriate adaptation is performed when displaying on the display screen, thereby achieving compatibility between the application and the device.
Of course, the usage scenario provided by the embodiment of the present application is not limited to the application scenario shown in fig. 3, but may be used in other possible application scenarios, and the embodiment of the present application is not limited.
Based on the above description, the embodiment of the present application provides a method for controlling a camera, and a key workflow chart of the method is shown in fig. 4, and may include the following contents:
in step 401, in response to the camera activation instruction, the number of physical cameras and the camera parameter information of the terminal device are acquired.
In some embodiments, in a terminal device including two physical cameras, a front camera and a rear camera, the camera identifiers and their attributes are shown in Table 1 below:
TABLE 1
Physical hardware | Camera ID | Facing attribute
Rear camera | 0 | back
Front camera | 1 | front
Since the terminal device contains two physical cameras, two camera objects are created: a front camera and a rear camera. The Camera ID corresponding to the front camera is 1, the Camera ID corresponding to the rear camera is 0, and their orientation (facing) attributes are front and back, respectively.
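By way of illustration (this snippet is not part of the patent), the Table 1 mapping is what an upper-layer application observes through the legacy v1 android.hardware.Camera API; a minimal Java sketch:

```java
// Illustrative only: enumerate cameras via the v1 API and print the
// Camera ID -> facing mapping of Table 1.
import android.hardware.Camera;

final class TableOneDump {
    static void dump() {
        int n = Camera.getNumberOfCameras();   // 2 on a dual-camera device
        for (int id = 0; id < n; id++) {
            Camera.CameraInfo info = new Camera.CameraInfo();
            Camera.getCameraInfo(id, info);
            String facing = (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT)
                    ? "front" : "back";
            System.out.println("Camera ID " + id + " -> facing " + facing);
        }
    }
}
```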
In step 402, if the number of physical cameras is one and the physical cameras are front cameras, a dual-camera object is created, the dual-camera object includes the front cameras and the logic cameras, the logic cameras are rear cameras, and the camera parameters of the rear cameras are configured to be the same as those of the front cameras.
For a terminal device containing only one front-facing camera, two camera objects still need to be created in order to meet the requirements of different upper-layer applications, since the prior art does not adapt to this type of device. The camera identifier and attribute of a terminal device that originally contains only one front camera are shown in Table 2 below:
TABLE 2
Physical hardware | Camera ID | Facing attribute
Front camera | 1 | front
Wherein there is only one front camera object.
When the Camera service starts, the application first enumerates the number of cameras present in the physical hardware. The Camera service is located in the Hardware Abstraction Layer (HAL), corresponding to the system library in fig. 2. After determining that there is only one front physical camera, a dual-camera object is created, comprising the front camera and a logical camera, with the virtual logical camera serving as the rear camera. The camera identifiers and attributes of a terminal device adopting the method provided by the application are shown in Table 3 below:
TABLE 3
Physical hardware | Camera ID | Facing attribute
Front camera | 0 | back
Front camera | 1 | front
In order to adapt to upper-layer applications, in the embodiment of the present application, as shown in Table 3, the front camera has two identities: one is the front camera, and the other is the logical camera (i.e. the virtual rear camera). In Table 3, "Front camera" in the "Physical hardware" column indicates that the physical hardware behind both camera objects (the front camera and the rear camera) is the front camera, and the Camera ID represents the identity of each camera. The identifier of the front camera is configured as a first identifier (Camera ID 1 in Table 3), its orientation attribute is configured as a first attribute (facing attribute front), and the camera parameters of the front camera are recorded (not shown in the table). The identifier of the logical camera (i.e. the rear camera) is configured as a second identifier (Camera ID 0 in Table 3), its orientation attribute is configured as a second attribute (facing attribute back), and the camera parameters of the front camera are recorded as the camera parameters of the rear camera; that is, parameters such as size are the same for the real front camera and the rear logical camera, and the physical front camera is what is actually invoked.
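As a rough illustration of the dual-camera object described above, the following Java sketch builds the Table 3 mapping in a simplified in-service registry; the names CameraSlot and DualCameraRegistry are hypothetical, since the real camera service performs this work in the native HAL rather than in java code:

```java
// Minimal sketch of building the Table 3 dual-camera mapping.
import java.util.ArrayList;
import java.util.List;

final class CameraSlot {
    final int cameraId;     // identity exposed to upper layers (0 = back, 1 = front)
    final String facing;    // orientation attribute: "back" or "front"
    final int physicalId;   // the physical camera that is actually opened

    CameraSlot(int cameraId, String facing, int physicalId) {
        this.cameraId = cameraId;
        this.facing = facing;
        this.physicalId = physicalId;
    }
}

final class DualCameraRegistry {
    static List<CameraSlot> build(int physicalCount, boolean onlyFrontCamera) {
        List<CameraSlot> slots = new ArrayList<>();
        if (physicalCount == 1 && onlyFrontCamera) {
            // One physical front camera: expose it twice, once as the
            // logical rear camera (ID 0) and once as the front camera
            // (ID 1), both bound to the same physical device.
            slots.add(new CameraSlot(0, "back", 0));
            slots.add(new CameraSlot(1, "front", 0));
        }
        return slots;
    }
}
```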
After the creation of the logical camera objects is completed, the calling relationship between the two camera objects and the real physical camera is established. The driver of the real physical camera sits below the HAL layer in which the Camera service resides, i.e. in the kernel layer of fig. 2. Once this calling relationship is established, the camera call interface is adapted as follows.
In step 403, the camera parameters of the front camera and the camera parameters of the rear camera are configured into a camera call interface, where the camera call interface is used to support the upper layer application to call the front camera or the rear camera.
In some embodiments, the Camera service reports the camera combination information, including the calling relationship, identifiers, attributes, parameters and so on, to the Camera framework interface; when that interface configures the camera-related information, it does so according to the camera parameters of the front camera and the camera parameters of the rear camera. It should be noted that the Camera framework interface is located on the java side of the application framework layer in fig. 2.
When a target camera is called by an upper-layer application, the application reads Table 3 above through the Camera framework interface to obtain the calling relationship and parameter information of the target camera, and then calls the target camera. For example, if the target camera is the rear camera, the upper-layer application reads Table 3 through the Camera framework interface to obtain the calling relationship and parameter information of the rear camera, and finally calls the rear camera according to that calling relationship.
Under the existing Android system, the interfaces through which an upper-layer application calls the camera come in two flavors, a v1 interface and a v2 interface.
1. For the v1 interface, the application obtains camera information through the getInfo interface, so the camera parameters of the front camera and of the rear camera need to be configured in the getInfo interface so that the information of both cameras can be obtained correctly.
2. For the v2 interface, the application acquires camera information by reading metadata, so the camera information must be configured correctly at the point where the metadata is generated.
In addition, as the technology evolves, if other extension interfaces exist, the camera information needs to be configured correctly for each extension interface in turn, so that the front camera and the virtual rear camera can also be called through those extension interfaces.
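As an illustration of the v2 path (again not the patent's own code), once the metadata is generated correctly an application can read the same facing information through the standard camera2 calls:

```java
// Illustrative only: read each camera's LENS_FACING characteristic, which
// is backed by the metadata the camera service generates.
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata;

final class TableThreeDumpV2 {
    static void dump(Context context) throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(id);
            Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
            String label = (facing != null && facing == CameraMetadata.LENS_FACING_FRONT)
                    ? "front" : "back";
            System.out.println("camera2 id " + id + " -> facing " + label);
        }
    }
}
```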
After the configuration of the camera call interface is completed, an application calling the camera framework interface can treat the single front-camera device as if it were configured with two cameras, opening the front camera and the rear camera respectively; however, once a call instruction is issued, whether it targets the front camera or the rear camera, what actually opens is the front camera in the physical sense.
In some embodiments, since the rear camera is a virtual logical camera created by the application while the front camera is what is actually used, image data collected by a front camera and by a real rear camera differ when shown on the display screen. For example, image data collected by the front camera should be displayed mirrored; if the terminal device is rotated 90 degrees, the image shown on the display screen needs to be rotated correspondingly to match the angle observed by the human eye; likewise, the image shown when the terminal device is held horizontally needs a corresponding rotation; and so on.
Therefore, for accurate previewing, the images output by the camera must be displayed to the user on the display screen with the correct orientation.
In some embodiments, if the application invokes the front camera to collect an image, the collected image is output and displayed after a first image conversion based on the imaging characteristics of the front camera. The specific flow, shown in fig. 5, may be implemented as follows:
in step 501, the rotation angle of the terminal device is acquired.
It should be noted that the rotation angle can be obtained through the system gyroscope sensor; the rotation angle is the rotation angle of the plane in which the display screen of the terminal device lies. When the device gyroscope is enabled, as shown in fig. 6, the display can switch among four orientations, and the coordinate origin used for rendering switches among the four corners of the physical screen; the four corner vertices are labeled A, B, C, D. Since the coordinate origin is the upper left corner of the screen display, the origin represented by the black dot in the upper left corner of fig. 6 moves among A, B, C, D, and the black arrow marks the positive direction of the camera; this makes it easy to distinguish the rotation angle of the terminal device and to determine the transformation relationship between the acquired image and the display direction. As can be seen from fig. 6, the clockwise rotation angles of the terminal device are typically 0, 90, 180 and 270 degrees, but are not limited to these angles.
It should be noted that the black arrows in fig. 6 all point in the positive direction of the camera, which is fixed relative to the terminal device; that is, the camera direction is perpendicular to the long edge of the terminal held in landscape orientation.
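One plausible way for the upper layer to obtain this rotation angle, not prescribed by the patent, is Android's OrientationEventListener, quantizing the reported degrees to the nearest multiple of 90; RotationTracker is an illustrative name:

```java
// Illustrative sketch: track the device's clockwise rotation and snap it
// to 0/90/180/270, matching the four cases of fig. 6.
import android.content.Context;
import android.view.OrientationEventListener;

final class RotationTracker extends OrientationEventListener {
    private int rotation;  // quantized clockwise rotation: 0, 90, 180 or 270

    RotationTracker(Context context) {
        super(context);
    }

    @Override
    public void onOrientationChanged(int orientation) {
        if (orientation == ORIENTATION_UNKNOWN) return;
        // Snap to the nearest multiple of 90 degrees; the user cannot be
        // expected to hold the device at an exact angle.
        rotation = ((orientation + 45) / 90 * 90) % 360;
    }

    int currentRotation() {
        return rotation;
    }
}
```

A caller would construct the tracker and invoke tracker.enable() (for example in onResume) to start receiving orientation events.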
In step 502, after the acquired image is rotated by a specified clockwise angle based on the rotation angle, it is displayed with the upper left corner of the image at the upper left corner of the display screen.
In some embodiments, the collected image is rotated by a specified clockwise angle based on the rotation angle and then displayed with the upper left corner of the image at the upper left corner of the display screen. Specifically, the collected image is shown in the second column of each case ((1), (2), (3), (4)) in fig. 6; since in landscape display the collected image is a mirror image of the photographed object, the way the front camera adapts the collected image to the rotation angle of the terminal can be implemented as shown in fig. 7, including:
in step 701, if the terminal device rotates clockwise by a first angle, the acquired image is rotated clockwise by a second angle for display.
Since the user cannot be expected to control the rotation angle precisely, in the embodiment of the present application the first angle is within a first specified range referenced to 0 degrees (the terminal orientation shown in case (1) of fig. 6 corresponds to the first angle), and the second angle is within a second specified range referenced to 90 degrees (the rotation applied to the acquired image in case (1) of fig. 6 is the second angle).
In step 702, if the terminal device rotates clockwise by a third angle, the acquired image is rotated clockwise by a fourth angle for display. Likewise, because the user does not need to control the rotation angle precisely, in the embodiment of the present application the third angle is within a third specified range referenced to 90 degrees (e.g., the 90-degree clockwise rotation of the terminal in case (2) of fig. 6 is the third angle), and the fourth angle is within a fourth specified range referenced to 180 degrees (e.g., the 180-degree clockwise rotation of the image in case (2) of fig. 6).
In step 703, if the terminal device rotates clockwise by a fifth angle, the acquired image is rotated clockwise by a sixth angle for display; the fifth angle is within a fifth specified range referenced to 180 degrees (e.g., the 180-degree clockwise rotation of the terminal in case (3) of fig. 6 is the fifth angle), and the sixth angle is within a sixth specified range referenced to 270 degrees (e.g., the 270-degree clockwise rotation of the image in case (3) of fig. 6).
In step 704, if the terminal device rotates clockwise by a seventh angle, the acquired image is rotated clockwise by an eighth angle for display; the seventh angle is within a seventh specified range referenced to 270 degrees (e.g., the 270-degree clockwise rotation of the terminal in case (4) of fig. 6 is the seventh angle), and the eighth angle is within an eighth specified range referenced to 360 degrees (e.g., the 360-degree clockwise rotation of the image in case (4) of fig. 6 is the eighth angle).
As shown in fig. 6, the relative positions of the four vertices A, B, C, D of the image acquired by the front camera of the terminal device are unchanged, and corresponding adjustment and adaptation are performed on display according to the rotation angle of the terminal device. In case (1) of fig. 6, the terminal device is rotated clockwise by 0 degrees when the acquired image is displayed; in case (2), shown in the second dotted-line frame, the terminal device is rotated clockwise by 90 degrees; in case (3), by 180 degrees; and in case (4), by 270 degrees. By adjusting and adapting the image shown on the display screen according to the rotation angle of the terminal device, the application improves the user experience.
To summarize, the rotation applied to images collected by the front camera can be implemented based on formula (1); the application uses formula (1) to determine the specified clockwise angle:
y=(a+x)%b (1)
where y represents the specified clockwise angle, x represents the rotation angle, a and b are constants, and % denotes the remainder operation. Taking a as 90 and b as 360, the formula for the specified clockwise angle becomes formula (2) below:
y=(90+x)%360 (2)
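As a direct rendering of formula (2), a one-method Java sketch (the class and method names are illustrative):

```java
// Maps the device's quantized clockwise rotation x (0/90/180/270) to the
// clockwise display rotation y for the front camera: y = (a + x) % b,
// with a = 90 and b = 360 as stated in the text.
final class FrontRotation {
    static int specifiedClockwiseAngle(int deviceRotation) {
        return (90 + deviceRotation) % 360;
    }
}
// Examples matching fig. 6: x=0 -> y=90, x=90 -> y=180, x=180 -> y=270,
// x=270 -> y=0 (i.e. 360 degrees, a full turn).
```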
In other embodiments, if the application invokes the rear camera to collect an image, the collected image is output for display after a second image conversion based on the imaging characteristics of the rear camera. Unlike the mirrored display of images acquired by the front camera, an image acquired by a real rear camera should, in theory, face the same way as the scene the user sees; since the image actually acquired by the front camera is mirrored, the virtual rear camera must apply a mirror flip to the acquired image to achieve, visually, the effect of acquisition by a rear camera. For example, when photographing, an image captured by the front camera is in theory mirrored relative to the photographed object, while the display of a rear camera is not mirrored, so a mirror flip must be applied to the image when the virtual rear camera is called. Therefore, the adaptation process for the rear camera may be implemented as shown in fig. 8:
In step 801, a rotation angle of a terminal device is acquired.
In step 802, the acquired image is rotated by a specified clockwise angle based on the rotation angle, and the rotated image is flipped left and right for display.
The specified clockwise angle is determined by the following equation (1):
y=(a+x)%b (1)
where y represents the specified clockwise angle, x represents the rotation angle, and a and b are constants; taking a as 90 and b as 360, the formula for the specified clockwise angle is formula (2) below:
y=(90+x)%360 (2)
As shown in fig. 9, the relative positions of the four vertices A, B, C, D of the image acquired by the front camera of the terminal device are unchanged, and corresponding adjustment and adaptation are performed on display according to the rotation angle of the terminal device. In case (1) of fig. 9, the terminal device is rotated clockwise by 0 degrees, and the acquired image is rotated clockwise by 90 degrees and flipped left and right for display. In case (2), the terminal device is rotated clockwise by 90 degrees, and the acquired image is rotated clockwise by 180 degrees and flipped left and right for display. In case (3), the terminal device is rotated clockwise by 180 degrees, and the acquired image is rotated clockwise by 270 degrees and flipped left and right for display. In case (4), the terminal device is rotated clockwise by 270 degrees, and the acquired image is rotated clockwise by 360 degrees and flipped left and right for display. Compared with calling the front camera in fig. 6, the hairpin in the final display of fig. 9 appears on the opposite side, which reflects the mirror flip required when the upper layer calls the rear camera (i.e. the logical camera) as opposed to the front camera.
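A hedged sketch of this rotate-then-flip conversion on a Bitmap preview frame, using the standard android.graphics.Matrix operations; the patent does not specify this implementation, and a real pipeline would more likely transform the preview surface than individual bitmaps:

```java
// Illustrative only: rotate the frame by the specified clockwise angle
// from formula (2), then mirror it left-right to undo the front camera's
// mirroring, yielding the rear-camera display effect.
import android.graphics.Bitmap;
import android.graphics.Matrix;

final class RearCameraConversion {
    static Bitmap convert(Bitmap frame, int deviceRotation) {
        int clockwise = (90 + deviceRotation) % 360;  // formula (2)
        Matrix m = new Matrix();
        m.postRotate(clockwise);   // rotate clockwise by the specified angle
        m.postScale(-1f, 1f);      // then flip left and right
        return Bitmap.createBitmap(frame, 0, 0,
                frame.getWidth(), frame.getHeight(), m, true);
    }
}
```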
After the angle rotation is completed, the images to be displayed are delivered to the upper-layer application, so that the application's display interface shows the correct display angle, which matches the user's habits and improves the user experience.
It should be added that the overall architecture of the method for controlling a camera provided by the application is shown in fig. 10:
the Camera interface is located on the java side of the application framework layer in fig. 2 and is used for application calls; the single front camera belongs to the hardware layer of the terminal device; the Camera service is located in the system library in fig. 2, where the two logical camera objects are created; the collected images are adapted to the display screen by the angle adjustment module and are finally shown on the display screen.
In some embodiments, the overall flow of the method for controlling a camera provided by the present application, shown in fig. 11, may be implemented as follows:
in step 1101, in response to the camera activation instruction, the camera service is activated.
In step 1102, the number and parameters of cameras are read based on the camera service.
In step 1103, if one front camera is read, a front camera object and a logical camera object are created, and the logical camera corresponding to the logical camera object serves as the rear camera. On this basis, the application makes a terminal device with a single front camera contain two camera objects, a front camera object and a logical camera object: the logical camera object serves upper-layer applications' calls to the rear camera, and the front camera object serves their calls to the front camera.
In step 1104, the front camera object and the logical camera object are configured with the parameters of the physical front camera. In the configuration process, the application uses the parameter information of the physical front camera to configure both camera objects; for example, the size of the physical front camera is configured as the size of both cameras. The identities and orientation attributes of the two objects differ: as shown in Table 3 above, the identity and orientation attribute of the front camera object are 1 and front, respectively, while those of the logical camera object are 0 and back, respectively.
In step 1105, the calling relationship between the two camera objects and the physical camera is established. While configuring the two camera objects, the application also establishes their calling relationship with the physical camera: as shown in Table 3 above, the correspondence between the physical hardware and the camera objects is the calling relationship. Since the terminal device contains only one front camera, whenever an upper-layer application calls a camera, what is actually called is the physical front camera.
In step 1106, the relevant information of the camera call interface is configured based on the parameters and calling relationships of the front camera object and the logical camera object. The relevant information includes the calling relationship between the camera objects and the physical hardware, as well as the parameter information of the two camera objects. Upper-layer applications call the camera through a camera call interface; to make that call possible, the information the interface requires is configured into the corresponding interface, i.e., the information contained in Table 3 is configured into the relevant camera call interfaces. For example, if an upper-layer application calls the rear camera (i.e. the logical camera) to scan a two-dimensional code, the application reads Table 3 through the call interface of the rear camera (e.g. the Camera framework interface) to obtain the calling relationship of the rear camera, and then calls the physical front camera according to that relationship; the upper-layer application has thus called the rear camera and completes the final code-scanning operation. In this way, the application achieves adaptation between upper-layer applications and the code-scanning function. Based on the above embodiments, the embodiment of the application can, under current technical conditions, establish a virtual logical camera to stand in for the rear camera while adapting to different upper-layer applications, making upper-layer applications compatible with single-camera devices so that the photographing and scanning functions can be used normally and the user experience is improved.
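Tying the fig. 11 steps together, a minimal end-to-end sketch under the same assumptions as the earlier snippets, reusing the hypothetical CameraSlot and DualCameraRegistry types; CameraServiceStub stands in for camera-service internals that actually live in the native HAL:

```java
// Illustrative only: the fig. 11 flow from camera start instruction to
// configured call interface.
import java.util.List;

interface CameraServiceStub {
    void start();                                         // step 1101
    int physicalCameraCount();                            // step 1102
    boolean isFrontFacing(int physicalId);
    void configureCallInterface(List<CameraSlot> slots);  // step 1106
}

final class SingleFrontCameraBootstrap {
    static void onCameraStartInstruction(CameraServiceStub service) {
        service.start();
        if (service.physicalCameraCount() == 1 && service.isFrontFacing(0)) {
            // steps 1103-1105: create the front camera object and the
            // logical camera object, both configured with the physical
            // front camera's parameters and bound to physical camera 0.
            List<CameraSlot> slots = DualCameraRegistry.build(1, true);
            service.configureCallInterface(slots);
        }
    }
}
```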
In an exemplary embodiment, the present application also provides a computer-readable storage medium including instructions, such as the memory 120 including instructions, executable by the processor 180 of the terminal 100 to perform the above-described method of controlling a camera. Alternatively, the computer readable storage medium may be a non-transitory computer readable storage medium, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by the processor 180, implements a method of controlling a camera as provided by the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. A method of controlling a camera, the method comprising:
in response to a camera start instruction, acquiring the number of physical cameras of a terminal device and camera parameter information;
if the number of physical cameras is one and the physical camera is a front camera, creating a dual-camera object, wherein the dual-camera object comprises the front camera and a logical camera, the logical camera serves as a rear camera, and the camera parameters of the rear camera are the same as those of the front camera;
configuring the camera parameters of the front camera and the camera parameters of the rear camera into a camera call interface, wherein the camera call interface is used for supporting an upper-layer application in calling the front camera or the rear camera;
the method further comprising:
in response to a call to the front camera or a call to the rear camera, calling the front camera to acquire an image;
if the front camera is called to acquire the image, performing a first image conversion on the acquired image based on imaging characteristics of the front camera and then outputting and displaying the image;
if the rear camera is called to acquire the image, performing a second image conversion on the acquired image based on imaging characteristics of the rear camera and then outputting and displaying the image;
wherein performing the second image conversion on the acquired image based on the imaging characteristics of the rear camera and then outputting and displaying the image specifically comprises:
acquiring a rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device lies;
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and flipping the rotated image left-right for display;
wherein the specified clockwise angle is determined using the following formula:
y = (a + x) % b
where y represents the specified clockwise angle, x represents the rotation angle, and a and b are constants.
2. The method according to claim 1, wherein creating the dual-camera object specifically comprises:
configuring an identifier of the front camera as a first identifier, configuring an orientation attribute of the front camera as a first attribute, and recording the camera parameters of the front camera; and
configuring an identifier of the rear camera as a second identifier, configuring an orientation attribute of the rear camera as a second attribute, and recording the camera parameters of the front camera as the camera parameters of the rear camera.
3. The method according to claim 1, wherein performing the first image conversion on the acquired image based on the imaging characteristics of the front camera and then outputting and displaying the image specifically comprises:
acquiring a rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device lies;
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and then displaying the image with the upper-left corner of the rotated acquired image coinciding with the upper-left corner of the rotated display screen of the terminal device.
4. The method according to claim 3, wherein controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle specifically comprises:
if the terminal device rotates clockwise by a first angle, rotating the acquired image clockwise by a second angle for display, the first angle being within a first specified range centered on 0 degrees and the second angle being within a second specified range centered on 90 degrees;
if the terminal device rotates clockwise by a third angle, rotating the acquired image clockwise by a fourth angle for display, the third angle being within a third specified range centered on 90 degrees and the fourth angle being within a fourth specified range centered on 180 degrees;
if the terminal device rotates clockwise by a fifth angle, rotating the acquired image clockwise by a sixth angle for display, the fifth angle being within a fifth specified range centered on 180 degrees and the sixth angle being within a sixth specified range centered on 270 degrees;
if the terminal device rotates clockwise by a seventh angle, rotating the acquired image clockwise by an eighth angle for display, the seventh angle being within a seventh specified range centered on 270 degrees and the eighth angle being within an eighth specified range centered on 360 degrees.
5. A terminal device, comprising:
a display for displaying the acquired image;
a memory for storing executable instructions of the processor;
a processor for executing the executable instructions to implement the steps of the method of controlling a camera as claimed in any one of claims 1 to 4.
6. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of a terminal device, enable the terminal device to perform the steps of the method of controlling a camera according to any one of claims 1-4.
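An illustrative sketch of the formula in claim 1: the patent leaves a and b as unspecified constants, but the angle ranges recited in claim 4 are consistent with assuming a = 90 and b = 360, which the snippet below checks; the code and the constants are assumptions, not taken from the patent.

// Checks that assumed constants a = 90, b = 360 reproduce the angle
// mapping recited in claim 4. Purely illustrative.
public class RotationDemo {
    static int specifiedClockwiseAngle(int deviceRotation) {
        final int a = 90, b = 360; // assumed values; the patent only says "constants"
        return (a + deviceRotation) % b; // y = (a + x) % b
    }

    public static void main(String[] args) {
        for (int x : new int[] {0, 90, 180, 270}) {
            // Prints 0 -> 90, 90 -> 180, 180 -> 270, 270 -> 0; the last value
            // is equivalent modulo 360 to the 360 degrees recited in claim 4.
            System.out.println(x + " -> " + specifiedClockwiseAngle(x));
        }
        // For the logical rear camera, claim 1 then mirrors the rotated frame
        // left-right before display.
    }
}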
CN202210108459.8A 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product Active CN114449171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210108459.8A CN114449171B (en) 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210108459.8A CN114449171B (en) 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product

Publications (2)

Publication Number Publication Date
CN114449171A CN114449171A (en) 2022-05-06
CN114449171B (en) 2023-10-27

Family

ID=81372033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210108459.8A Active CN114449171B (en) 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114449171B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079886B (en) * 2022-07-21 2023-01-03 荣耀终端有限公司 Two-dimensional code recognition method, electronic device, and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9794461B2 (en) * 2015-01-13 2017-10-17 Samsung Electronics Co., Ltd. Camera activation and illuminance
US11412382B2 (en) * 2019-11-07 2022-08-09 Humans, Inc Mobile application camera activation and de-activation based on physical object location

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106851172A (en) * 2017-03-27 2017-06-13 青岛海信电器股份有限公司 The call method of camera, apparatus and system
KR20200070611A (en) * 2018-12-10 2020-06-18 (주)테슬라시스템 Formation and Application Method of Virtual Camera Image for Smartphone
WO2022012418A1 (en) * 2020-07-16 2022-01-20 华为技术有限公司 Photographing method and electronic device
CN113965809A (en) * 2020-11-30 2022-01-21 深圳卡多希科技有限公司 Method and device for simultaneous interactive live broadcast based on single terminal and multiple platforms
CN112804459A (en) * 2021-01-12 2021-05-14 杭州星犀科技有限公司 Image display method and device based on virtual camera, storage medium and electronic equipment
CN113448648A (en) * 2021-06-29 2021-09-28 技德技术研究所(武汉)有限公司 Linux-compatible Android camera access method and device

Also Published As

Publication number Publication date
CN114449171A (en) 2022-05-06

Similar Documents

Publication Publication Date Title
US11385857B2 (en) Method for displaying UI component and electronic device
JP2023514631A (en) Interface layout method, apparatus and system
CN111367456A (en) Communication terminal and display method in multi-window mode
CN111225108A (en) Communication terminal and card display method of negative screen interface
CN112527222A (en) Information processing method and electronic equipment
CN112420217A (en) Message pushing method, device, equipment and storage medium
WO2022161119A1 (en) Display method and electronic device
CN113709026B (en) Method, device, storage medium and program product for processing instant communication message
CN114449171B (en) Method for controlling camera, terminal device, storage medium and program product
CN111176766A (en) Communication terminal and component display method
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
CN111031377B (en) Mobile terminal and video production method
CN116095413B (en) Video processing method and electronic equipment
CN113254132B (en) Application display method and related device
CN113467663B (en) Interface configuration method, device, computer equipment and medium
CN111324255B (en) Application processing method based on double-screen terminal and communication terminal
CN111163220B (en) Display method, communication terminal and computer storage medium
CN113157092A (en) Visualization method, terminal device and storage medium
CN111159734A (en) Communication terminal and multi-application data inter-access processing method
CN113542711A (en) Image display method and terminal
CN113641533B (en) Terminal and short message processing method
CN115334239B (en) Front camera and rear camera photographing fusion method, terminal equipment and storage medium
CN114143456B (en) Photographing method and device
CN111142648B (en) Data processing method and intelligent terminal
CN113253905B (en) Touch method based on multi-finger operation and intelligent terminal

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11
Applicant after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.
Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11
Applicant before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
GR01 Patent grant