CN114449171A - Method for controlling camera, terminal device, storage medium and program product

Info

Publication number
CN114449171A
Authority
CN
China
Prior art keywords
camera
angle
image
cameras
application
Prior art date
Legal status
Granted
Application number
CN202210108459.8A
Other languages
Chinese (zh)
Other versions
CN114449171B (en)
Inventor
冯坤
聂和平
周春萌
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202210108459.8A
Publication of CN114449171A
Application granted
Publication of CN114449171B

Classifications

    • H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof → H04N23/60 Control of cameras or camera modules → H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION → H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof → H04N23/80 Camera processing pipelines; Components thereof

Abstract

The application discloses a method for controlling a camera, a terminal device, a storage medium and a program product, which are used to solve the problem that camera functions cannot be adapted to applications on a device with only a single front-facing camera. First, in response to a camera start instruction, the number of physical cameras and the camera parameter information of the terminal device are acquired. If there is exactly one physical camera and it is a front camera, a dual-camera object is created, which comprises the front camera and a logical camera; the logical camera serves as a rear camera whose parameters are the same as those of the front camera. The camera parameters of the front camera and of the rear camera are then configured into a camera calling interface, which is used to support an upper-layer application in calling the front camera or the rear camera. In summary, by creating a virtual logical camera to stand in for the rear camera, the camera functions can be used normally, improving the user experience.

Description

Method for controlling camera, terminal device, storage medium, and program product
Technical Field
The present application belongs to the technical field of intelligent terminals, and in particular, relates to a method for controlling a camera, a terminal device, a storage medium, and a program product.
Background
As the photographing functions of smartphones are used more and more, users attach increasing importance to a good photographing experience.
In the prior art, most phone models carry two or more cameras, including a front camera and a rear camera, and downloadable applications are designed for such models. When these applications are installed on a model that has only one front camera and no rear camera, the camera functions cannot be used normally. For example, when scanning a QR code or recognizing a face, the camera may fail to start, or the camera image may be displayed at a deflected angle.
Therefore, how to adapt camera functions to applications on a device with a single front-facing camera is a problem that needs to be solved.
Disclosure of Invention
The application aims to provide a method for controlling a camera, a terminal device, a storage medium and a program product, so as to solve the problem that camera functions cannot be adapted to applications on a device with a single front-facing camera.
In a first aspect, the present application provides a method for controlling a camera, the method including:
in response to a camera start instruction, acquiring the number of physical cameras and the camera parameter information of the terminal device;
if the number of physical cameras is one and the physical camera is a front camera, creating a dual-camera object, wherein the dual-camera object comprises the front camera and a logical camera, the logical camera serves as a rear camera, and the parameters of the rear camera are the same as those of the front camera; and
configuring the camera parameters of the front camera and the camera parameters of the rear camera into a camera calling interface, wherein the camera calling interface is used to support an upper-layer application in calling the front camera or the rear camera.
In some embodiments, creating the dual-camera object specifically includes:
configuring the identifier of the front camera as a first identifier, configuring the orientation attribute of the front camera as a first attribute, and recording the camera parameters of the front camera; and
configuring the identifier of the rear camera as a second identifier, configuring the orientation attribute of the rear camera as a second attribute, and recording the camera parameters of the front camera as the camera parameters of the rear camera.
In some embodiments, the method further comprises:
in response to a call to the front camera or a call to the rear camera, calling the front camera to acquire an image;
if the front camera is called to acquire the image, performing a first image conversion on the acquired image based on the imaging characteristics of the front camera, and then outputting and displaying the converted image; and
if the rear camera is called to acquire the image, performing a second image conversion on the acquired image based on the imaging characteristics of the rear camera, and then outputting and displaying the converted image.
In some embodiments, performing the first image conversion on the acquired image based on the imaging characteristics of the front camera and then outputting and displaying it specifically includes:
acquiring the rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device lies; and
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and then displaying it with the upper left corner of the display screen as the upper left corner of the image.
In some embodiments, controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle specifically includes:
if the terminal device rotates clockwise by a first angle, rotating the acquired image clockwise by a second angle for display, wherein the first angle is within a first specified range referenced to 0 degrees and the second angle is within a second specified range referenced to 90 degrees;
if the terminal device rotates clockwise by a third angle, rotating the acquired image clockwise by a fourth angle for display, wherein the third angle is within a third specified range referenced to 90 degrees and the fourth angle is within a fourth specified range referenced to 180 degrees;
if the terminal device rotates clockwise by a fifth angle, rotating the acquired image clockwise by a sixth angle for display, wherein the fifth angle is within a fifth specified range referenced to 180 degrees and the sixth angle is within a sixth specified range referenced to 270 degrees;
if the terminal device rotates clockwise by a seventh angle, rotating the acquired image clockwise by an eighth angle for display, wherein the seventh angle is within a seventh specified range referenced to 270 degrees and the eighth angle is within an eighth specified range referenced to 360 degrees.
In some embodiments, performing the second image conversion on the acquired image based on the imaging characteristics of the rear camera and then outputting and displaying it specifically includes:
acquiring the rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device lies; and
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and flipping the rotated image left-right for display.
In some embodiments, the specified clockwise angle is determined using the following formula:
y=(a+x)%b
wherein y represents the specified clockwise angle, x represents the rotation angle, and a and b are both constants.
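For example, taking a as 90 and b as 360 (the constant values adopted in the detailed embodiments below), a terminal rotation of x = 90 degrees gives a specified clockwise angle of y = (90 + 90) % 360 = 180 degrees, consistent with the mapping of the third angle (around 90 degrees) to the fourth angle (around 180 degrees) described above.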
In a second aspect, the present application further provides a terminal device, comprising:
a display for displaying the acquired image;
a memory for storing executable instructions of the processor;
a processor for executing the executable instructions to implement any of the methods as provided in the first aspect of the application.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium, where instructions, when executed by a processor of a terminal device, enable the terminal device to perform any one of the methods as provided in the first aspect of the present application.
In a fourth aspect, an embodiment of the present application provides a computer program product comprising a computer program that, when executed by a processor, performs any of the methods as provided in the first aspect of the present application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
By creating a virtual logical camera to stand in for the rear camera, and by adapting accordingly to the differences among upper-layer applications, the present application makes upper-layer applications compatible with the camera functions of single-camera devices, so that the photographing and code-scanning functions can be used normally. In summary, the scheme provided by the present application solves the software adaptation problem of devices with a single front-facing camera and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application. On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain preferred embodiments of the application.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings described below are only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a software architecture of a terminal according to an embodiment of the present application;
fig. 3 is a schematic application interface diagram of a method for controlling a camera according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for controlling a camera according to an embodiment of the present disclosure;
fig. 5 is a schematic flow chart illustrating that an application calls a front-facing camera to adapt an acquired image to a display screen according to the embodiment of the present application;
fig. 6 is a schematic view of angle conversion of a terminal device, a collected image, a displayed image, and a displayed effect provided in the embodiment of the present application;
fig. 7 is a schematic flowchart illustrating a process of controlling display of an acquired image on a display screen based on a rotation angle according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a process of adapting an acquired image to a display screen by using an application to call a rear camera according to an embodiment of the present application;
fig. 9 is a schematic view of another angle conversion of the terminal device, the acquired image, the display image, and the display effect provided in the embodiment of the present application;
fig. 10 is a schematic overall architecture diagram of a method for controlling a camera according to an embodiment of the present application;
fig. 11 is an overall flowchart of a method for controlling a camera according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, "/" indicates "or" unless otherwise specified; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
First, fig. 1 shows a schematic structural diagram of a terminal 100.
The following describes an embodiment specifically by taking the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 1 is merely an example, and that the terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of the terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; it may receive downlink data from a base station and deliver it to the processor 180 for processing, and may transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs the various functions of the terminal 100 and data processing by executing the software programs or data stored in the memory 120. The memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store the program code for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal 100. In particular, the display unit 130 may include a touch screen 131 disposed on the front of the terminal 100, which can collect touch operations by the user on or near it, such as starting the camera, closing the camera, clicking a button, or dragging a scroll box.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal 100. The display 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be configured to display an interface for a user to start a camera for photographing and scanning as described in this application.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The terminal 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, etc.
Audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between a user and the terminal 100. The audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output. The terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. Conversely, the microphone 162 converts collected sound signals into electrical signals, which are received by the audio circuit 160 and converted into audio data; the audio data is then output to the RF circuit 110 for transmission to, for example, another terminal, or output to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 170, the terminal 100 can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access.
The processor 180 is a control center of the terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, and a touch response, as well as the method described in the embodiments of the present application. Further, the processor 180 is coupled with the display unit 130.
The Bluetooth module 181 is configured to exchange information with other Bluetooth devices through the Bluetooth protocol. For example, the terminal 100 may establish a Bluetooth connection with a wearable terminal device (e.g., a smart watch) having a Bluetooth module via the Bluetooth module 181, so as to exchange data.
The terminal 100 also includes a power supply 190 (e.g., a battery) to power the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal 100 may also be configured with power buttons for powering the terminal on and off, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the terminal 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer, from top to bottom, respectively.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer can be divided into a java side and a native side. The java side includes a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, an application manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, short messages, etc.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and may be used to build applications. A display interface may be composed of one or more views. For example, an interface including a camera control, or a display interface including a short-message notification icon, may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the terminal 100, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application.
The notification manager allows an application to display notification information (e.g., a message digest or message content) in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system status bar at the top of the screen, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information may be prompted in the status bar, a prompt tone may sound, the terminal may vibrate, or an indicator light may flash.
The services of the native side are located on the native side of the application framework layer, adjacent to the system library.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and camera services, among others.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, among others. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The camera service is used to manage logical camera objects and to configure corresponding parameter information and the like for the camera objects.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the terminal 100 software and hardware in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including the touch coordinates, the timestamp of the touch operation, and other information). The raw input events are stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a click on the control of the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or video through the camera 140.
The terminal 100 in the embodiments of the present application may be a terminal device with a single front-facing camera, such as a mobile phone, a tablet computer, a wearable device, a notebook computer, or a television. The method for controlling a camera provided by the present application is described with reference to the following embodiments.
The inventive concept of the present application can be summarized as follows: first, in response to a camera start instruction, the number of physical cameras and the camera parameter information of the terminal device are acquired; if the number of physical cameras is one and the physical camera is a front camera, a dual-camera object is created, which comprises the front camera and a logical camera, the logical camera serving as a rear camera whose parameters are the same as those of the front camera; then, the camera parameters of the front camera and of the rear camera are configured into a camera calling interface to support upper-layer applications in calling the front camera or the rear camera. In summary, under current technical conditions, the embodiments of the present application create a virtual logical camera to replace the rear camera and adapt accordingly to the differences among upper-layer applications, making upper-layer applications compatible with single-camera devices so that the photographing and code-scanning functions can be used normally, improving the user experience.
After the main inventive concepts of the embodiments of the present application have been introduced, the application scenarios to which the technical solutions of the embodiments can be applied are briefly described below. It should be noted that the application scenarios described below are only used for describing the embodiments of the present application and are not limiting. In specific implementation, the technical scheme provided by the embodiments of the present application can be applied flexibly according to actual needs.
Fig. 3 is a schematic interface diagram of a user opening the camera for photographing and scanning according to an embodiment of the present application. The interface is the interface for opening the photographing function and includes functions such as photographing and portrait. The device includes only a front-facing camera; if an upper-layer application calls the rear camera, the logical camera is called in place of the rear camera, and appropriate adaptation is performed when the image is shown on the display screen, thereby achieving compatibility between the application and the device.
Of course, the usage scenario provided in the embodiment of the present application is not limited to the application scenario shown in fig. 3, and may also be used in other possible application scenarios, which is not limited in the embodiment of the present application.
Based on the above description, an embodiment of the present application provides a method for controlling a camera. A flowchart of the key steps of the method is shown in fig. 4 and may include the following:
in step 401, in response to a camera start instruction, the number of physical cameras and camera parameter information of the terminal device are acquired.
In some embodiments, in a terminal device including two physical cameras, a front camera and a rear camera, the camera identifiers and attributes are as shown in table 1 below:
TABLE 1
Physical hardware    Camera ID    Facing attribute
Rear camera          0            back
Front camera         1            front
Because the terminal device includes two physical cameras, two camera objects are created: a front camera and a rear camera. The Camera ID corresponding to the front camera is 1, that corresponding to the rear camera is 0, and the Facing attributes are front and back, respectively.
In step 402, if the number of physical cameras is one and the physical camera is a front camera, a dual-camera object is created. The dual-camera object includes the front camera and a logical camera; the logical camera serves as the rear camera, and the camera parameters of the rear camera are configured to be the same as those of the front camera.
For a terminal device including only one front camera, applications in the prior art are not adapted to the device, so two camera objects still need to be created in order to meet the requirements of different upper-layer applications. The camera identifier and attributes of a terminal device that originally includes only one front camera are shown in table 2 below:
TABLE 2
Physical hardware    Camera ID    Facing attribute
Front camera         1            front
Wherein there is only one front facing camera object.
When the Camera service is started, it enumerates the number of cameras on the physical hardware. The Camera service is located in the Hardware Abstraction Layer (HAL layer), corresponding to the system library in fig. 2. After determining that there is only one front-facing physical camera, a dual-camera object is created, which includes the front camera and a logical camera; the virtual logical camera serves as the rear camera. The camera identifiers and attributes of a terminal device adopting the method provided by the present application are shown in table 3 below:
TABLE 3
Physical hardware    Camera ID    Facing attribute
Front camera         0            back
Front camera         1            front
In order to adapt to upper-layer applications, as shown in table 3, the front camera has two identities in the embodiments of the present application: one is the front camera itself, and the other is the logical camera (i.e., the virtual rear camera). In table 3 the "Physical hardware" column reads "Front camera" for both rows, meaning that the physical hardware behind both camera objects (the front camera and the rear camera) is the front camera. The Camera ID indicates the identifier of the camera. The identifier of the front camera is configured as the first identifier, i.e., Camera ID 1 in table 3; the orientation attribute of the front camera is configured as the first attribute, i.e., the Facing attribute front; and the camera parameters of the front camera are recorded (not shown in the table). The identifier of the logical camera (i.e., the rear camera) is configured as the second identifier, i.e., Camera ID 0 in table 3; its orientation attribute is configured as the second attribute, i.e., the Facing attribute back; and the camera parameters of the front camera are recorded as the camera parameters of the rear camera. In other words, parameters such as size are the same for the real front camera and the logical rear camera, and the physical front camera is what is actually called.
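The enumeration and dual-camera creation described above can be illustrated with the following minimal Java sketch. All class and method names here (PhysicalCamera, CameraEntry, DualCameraFactory.createCameraObjects) are hypothetical stand-ins chosen for illustration only; the real logic would live in the native Camera service of the HAL layer rather than in java code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of a physical camera reported by the hardware.
class PhysicalCamera {
    final String facing;   // "front" or "back"
    final int width;       // example camera parameters (size)
    final int height;

    PhysicalCamera(String facing, int width, int height) {
        this.facing = facing;
        this.width = width;
        this.height = height;
    }
}

// Hypothetical camera object exposed to the upper-layer application.
class CameraEntry {
    final int cameraId;           // first identifier (1) or second identifier (0)
    final String facing;          // orientation attribute: "front" or "back"
    final PhysicalCamera backing; // calling relationship: the hardware actually opened

    CameraEntry(int cameraId, String facing, PhysicalCamera backing) {
        this.cameraId = cameraId;
        this.facing = facing;
        this.backing = backing;
    }
}

public class DualCameraFactory {
    // Build the camera table of Table 3: a single physical front camera is
    // exposed as two camera objects that share the same parameters.
    static List<CameraEntry> createCameraObjects(List<PhysicalCamera> physical) {
        List<CameraEntry> entries = new ArrayList<>();
        if (physical.size() == 1 && "front".equals(physical.get(0).facing)) {
            PhysicalCamera front = physical.get(0);
            // Logical (virtual) rear camera: ID 0, facing back, backed by the front camera.
            entries.add(new CameraEntry(0, "back", front));
            // Real front camera: ID 1, facing front.
            entries.add(new CameraEntry(1, "front", front));
        } else {
            // Devices with two or more physical cameras keep the normal
            // one-to-one mapping of Table 1 (ID assignment shown here is illustrative).
            int id = 0;
            for (PhysicalCamera cam : physical) {
                entries.add(new CameraEntry(id++, cam.facing, cam));
            }
        }
        return entries;
    }
}
```

The key point the sketch captures is that both rows of table 3 reference the same physical hardware, so a later open on either ID reaches the physical front camera.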
After the creation of the logical camera object is completed, the calling relationship between the two camera objects and the real physical camera is established. The driver of the real physical camera is located below the HAL layer where the Camera service resides, i.e., in the kernel layer of fig. 2. Once the calling relationship between the two camera objects and the real physical camera is established, the camera calling interface is adapted as follows.
In step 403, the camera parameters of the front camera and the camera parameters of the rear camera are configured into a camera calling interface, where the camera calling interface is used to support upper-layer applications in calling the front camera or the rear camera.
In some embodiments, the Camera service reports the camera combination information, including the calling relationship, identifiers, attributes, parameters, and the like, to the Camera framework interface, and the interface is configured according to the camera parameters of the front camera and the camera parameters of the rear camera when it configures the camera-related information. It should be noted that the Camera framework interface is located on the java side of the application framework layer in fig. 2.
If an upper-layer application calls a target camera, it reads table 3 through the Camera framework interface to obtain the calling relationship and parameter information of the target camera, and the call to the target camera is finally realized. For example, if the target camera called by the upper-layer application is the rear camera, the application reads table 3 through the Camera framework interface to obtain the calling relationship and parameter information of the rear camera, and the call to the rear camera is finally realized according to that calling relationship.
Under the existing Android system, the interfaces through which an upper-layer application calls the Camera come in two forms: the v1 interface and the v2 interface.
1. For the v1 interface, the application acquires camera information through the getInfo interface, so the camera parameters of the front camera and of the rear camera need to be configured in the getInfo interface so that the information of both cameras can be acquired correctly.
2. For the v2 interface, the application acquires camera information through metadata, so the camera information needs to be arranged correctly at the place where the metadata is generated.
In addition, as the technology evolves, if there are other extension interfaces, the camera information needs to be configured correctly for each extension interface so that the front camera and the virtual rear camera can also be called through those interfaces.
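As a rough illustration of the v1-style adaptation, the following Java sketch shows a getInfo-like interface that reports valid information for both camera IDs. CameraInfo and getCameraInfo are simplified stand-ins for the real framework types (the actual v1 interface is android.hardware.Camera.getCameraInfo), and the orientation value is an assumed recorded front-camera parameter.

```java
// Simplified stand-in for the framework's camera info structure.
class CameraInfo {
    int facing;        // 0 = back, 1 = front, following the v1 convention
    int orientation;   // sensor mounting angle in degrees
}

public class CameraInfoProvider {
    static final int FACING_BACK = 0;
    static final int FACING_FRONT = 1;

    // Both camera IDs report valid info; the virtual rear camera (ID 0)
    // reuses the recorded parameters of the physical front camera.
    static void getCameraInfo(int cameraId, CameraInfo info) {
        int frontSensorOrientation = 270; // hypothetical recorded parameter
        if (cameraId == 0) {
            info.facing = FACING_BACK;    // second attribute: back
            info.orientation = frontSensorOrientation;
        } else if (cameraId == 1) {
            info.facing = FACING_FRONT;   // first attribute: front
            info.orientation = frontSensorOrientation;
        } else {
            throw new IllegalArgumentException("unknown camera id: " + cameraId);
        }
    }
}
```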
After the configuration of the camera calling interface is completed, if an application calls the Camera framework interface, the single-front-camera device can be called as if it included two cameras, opening the front camera or the rear camera respectively. However, after a calling instruction is issued, regardless of whether it calls the front camera or the rear camera, the physical front camera is what is actually opened.
In some embodiments, since the rear camera is the virtual logical camera created in this application, the front camera is what is actually used, and the image data acquired by the front camera differs, when displayed on the display screen, from the image data a real rear camera would acquire. For example, image data acquired by the front camera should be displayed as a mirror image; likewise, if the terminal device rotates by 90 degrees, the image shown on the display screen needs to be rotated correspondingly to match the angle observed by the human eye, and the same applies when the terminal device is held horizontally.
Therefore, in order to preview the image accurately, the image output by the camera needs to be displayed to the user on the screen correctly; that is, the image acquired by the camera needs to be adapted to the display screen.
In some embodiments, if the application calls the front camera to acquire an image, the acquired image undergoes the first image conversion based on the imaging characteristics of the front camera before being output and displayed. The flow shown in fig. 5 can be implemented as follows:
in step 501, the rotation angle of the terminal device is acquired.
It should be noted that in the present application the rotation angle may be obtained from the system gyroscope sensor, and the rotation angle is the rotation angle of the plane in which the display screen of the terminal device lies. With the device gyroscope turned on, as shown in fig. 6, the display can switch among four orientations, and the corresponding rendering coordinate origin switches among the four corners of the physical screen; in this application the four corner vertices are labeled A, B, C, and D. Since the coordinate origin is the upper left corner of the screen display, the origin represented by the black dot in the upper left corner of fig. 6 moves among A, B, C, and D. The black arrow marks the positive direction of the camera, which makes it convenient to distinguish the rotation angle of the terminal device and to determine the transformation between the acquired image and the display orientation. As can be seen from fig. 6, the clockwise rotation angles of the terminal device are typically 0 degrees, 90 degrees, 180 degrees, and 270 degrees, but are not limited to these.
It should be noted that the black arrows in fig. 6 indicate the positive direction of the camera, which is fixed with respect to the terminal device; that is, the camera direction is perpendicular to the long edge of the terminal device in landscape orientation.
In step 502, the acquired image is rotated by a specified clockwise angle based on the rotation angle, and is then displayed with the upper left corner of the display screen as the upper left corner of the image.
In some embodiments, after the acquired image is rotated by the specified clockwise angle based on the rotation angle, it is displayed with the upper left corner of the display screen as the upper left corner of the image. Specifically, the acquired images are shown in the second column of fig. 6 (cases (1), (2), (3), and (4)). The acquired image is based on the landscape display mode and, because the front camera is used, needs to be mirrored with respect to the photographed object. The way the image acquired by the front camera adapts to the rotation angle of the terminal can therefore be implemented as shown in fig. 7, including:
in step 701, if the terminal device rotates clockwise by a first angle, the acquired image is rotated clockwise by a second angle for display.
Since it is considered that the user does not need to accurately control the accuracy of the rotation angle, the first angle in the embodiment of the present application is within a first specified range with reference to 0 degrees (the end case shown in case (1) in fig. 6 is the first angle), and the second angle is within a second specified range with reference to 90 degrees (the rotation angle for the captured image is 90 degrees in case (1) in fig. 6 is the second angle).
In step 702, if the terminal device rotates clockwise by a third angle, the acquired image is rotated clockwise by the fourth angle for displaying. Also, since it is considered that the user does not need to accurately control the accuracy of the rotation angle, the third angle is within a third specified range with reference to 90 degrees (the third angle is 90 degrees clockwise as the terminal is rotated in case (2) of fig. 6), and the fourth angle is within a fourth specified range with reference to 180 degrees (the fourth angle is 180 degrees clockwise as the image is rotated in case (2) of fig. 6).
In step 703, if the terminal device rotates clockwise by a fifth angle, the acquired image is rotated clockwise by a sixth angle for displaying, and also because it is considered that the user does not need to accurately control the precision of the rotation angle, in this embodiment of the present application, the fifth angle is within a fifth specified range based on 180 degrees (as in the case (3) of fig. 6, the clockwise 180 degrees of the terminal rotation is the fifth angle), and the sixth angle is within a sixth specified range based on 270 degrees (as in the case (3) of fig. 6, the clockwise 270 degrees of the image is the sixth angle).
In step 704, if the terminal device rotates clockwise by a seventh angle, the acquired image is rotated clockwise by an eighth angle for displaying, also considering that the user does not need to accurately control the precision of the rotation angle, so that the seventh angle is within a seventh specified range based on 270 degrees in the embodiment of the present application (for example, 270 degrees clockwise when the terminal rotates in case (4) of fig. 6 is the seventh angle), and the eighth angle is within an eighth specified range based on 360 degrees (for example, 360 degrees clockwise when the image rotates in case (4) of fig. 6 is the sixth angle).
As shown in fig. 6, the relative positions of the four vertices A, B, C, and D of the image acquired by the front camera of the terminal device remain unchanged, and the display is adjusted according to the rotation angle of the terminal device. In case (1) of fig. 6, with the terminal device rotated clockwise by 0 degrees, the acquired image is displayed rotated clockwise by 90 degrees; in case (2), with the terminal rotated clockwise by 90 degrees, the image is displayed rotated clockwise by 180 degrees; in case (3), the image is displayed rotated clockwise by 270 degrees; and in case (4), the image is displayed rotated clockwise by 360 degrees. By adjusting the displayed image according to the rotation angle of the terminal device, the present application improves the user experience.
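Since the first, third, fifth, and seventh angles are only required to fall within specified ranges around the reference angles, one plausible implementation, sketched below in Java under that assumption, snaps the raw gyroscope reading to the nearest of the four reference angles before the display rotation of fig. 7 is applied:

```java
public class RotationQuantizer {
    // Snap a raw clockwise device rotation (degrees, 0 <= raw < 360) to the
    // nearest of the four reference angles 0, 90, 180, 270 used in fig. 7.
    static int quantize(double raw) {
        return (int) (Math.round(raw / 90.0) * 90) % 360;
    }

    public static void main(String[] args) {
        System.out.println(quantize(85.0));   // 90  (within the range around 90 degrees)
        System.out.println(quantize(268.0));  // 270
        System.out.println(quantize(353.0));  // 0   (wraps around past 360)
    }
}
```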
In summary, the rotation operation applied to the image when the front camera is called can be implemented based on the following formula (1), which determines the specified clockwise angle:
y=(a+x)%b (1)
wherein y represents the specified clockwise angle, x represents the rotation angle, a and b are both constants, and % represents the remainder (modulo) operation. Taking a as 90 and b as 360, the formula for determining the specified clockwise angle becomes formula (2) below:
y=(90+x)%360 (2)
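A minimal Java sketch of formula (2), showing how the four terminal rotations of fig. 6 map to the display rotations applied to the front-camera image:

```java
public class FrontCameraRotation {
    // Formula (2): y = (90 + x) % 360, where x is the quantized clockwise
    // rotation of the terminal and y is the clockwise rotation applied to
    // the image acquired by the front camera before display.
    static int displayRotation(int deviceRotation) {
        return (90 + deviceRotation) % 360;
    }

    public static void main(String[] args) {
        for (int x : new int[] {0, 90, 180, 270}) {
            // Matches cases (1)-(4) of fig. 6: 0->90, 90->180, 180->270, 270->0 (i.e., 360).
            System.out.println(x + " -> " + displayRotation(x));
        }
    }
}
```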
In other embodiments, if the application calls the rear camera to acquire an image, the acquired image undergoes the second image conversion based on the imaging characteristics of the rear camera before being output and displayed. Unlike the mirror-image display of an image acquired by the front camera, the orientation of an image acquired by a rear camera should theoretically be consistent with what the user perceives; since the image acquired by the front camera is a mirror image, the virtual rear camera needs to mirror-flip the acquired image to achieve the visual effect of an image acquired by a real rear camera. For example, an image photographed by the front camera is theoretically a mirror image of the photographed object, whereas the display of a rear camera needs no mirroring, so mirror flipping must be performed when the virtual rear camera is called. The adaptation process for the rear camera can therefore be implemented as shown in fig. 8:
in step 801, the rotation angle of the terminal device is acquired.
In step 802, the acquired image is rotated by the specified clockwise angle based on the rotation angle, and the rotated image is flipped left-right for display.
The specified clockwise angle is determined by the same formula (1):
y=(a+x)%b (1)
wherein y represents the specified clockwise angle, x represents the rotation angle, and a and b are both constants. Taking a as 90 and b as 360, the formula for determining the specified clockwise angle becomes formula (2) below:
y=(90+x)%360 (2)
as shown in fig. 9, the relative positions of the four vertices A, B, C, D of the image captured by the front camera of the terminal device are unchanged, and corresponding adjustment and adaptation are performed when the display screen displays according to the rotation angle of the terminal device. In the case (1) included in fig. 9, when the terminal device is rotated clockwise by 0 degree, the captured image is rotated clockwise by 90 degrees, and the image is displayed while being turned left and right. In the case (2) included in fig. 9, when the terminal device is rotated 90 degrees clockwise, the captured image is rotated 180 degrees clockwise, and the image is displayed while being turned left and right. In the case (3) included in fig. 9, when the terminal device is rotated clockwise by 180 degrees, the captured image is rotated clockwise by 270 degrees, and the image is displayed while being turned left and right. In the case (4) included in fig. 9, if the terminal device is rotated by 270 degrees clockwise, the captured image is rotated by 360 degrees clockwise, and the image is displayed while being turned left and right. Thus, in contrast to the case of calling the front camera in fig. 6, when the front camera is finally displayed in fig. 9, the hairpin position changes, which indicates that mirror inversion is required when the upper layer should call the rear camera (i.e., the logic camera) relative to calling the front camera.
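The second image conversion, rotation by the specified clockwise angle followed by a left-right flip, can be sketched as follows on a simple matrix of pixel values. The int[][] representation and the helper names are illustrative stand-ins for the real image buffer handling:

```java
public class RearCameraTransform {
    // Rotate an image (as a matrix of pixel values) clockwise by 90 degrees.
    static int[][] rotate90cw(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[w][h];
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++)
                out[c][h - 1 - r] = img[r][c];
        return out;
    }

    // Mirror an image left-to-right, undoing the front camera's mirror effect.
    static int[][] flipHorizontal(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++)
                out[r][w - 1 - c] = img[r][c];
        return out;
    }

    // Second image conversion: rotate by the specified clockwise angle
    // y = (90 + x) % 360, then flip left-right (the extra step compared
    // with the front-camera path of fig. 6).
    static int[][] convertForRearCamera(int[][] acquired, int deviceRotation) {
        int y = (90 + deviceRotation) % 360;
        int[][] img = acquired;
        for (int i = 0; i < y / 90; i++) {
            img = rotate90cw(img);
        }
        return flipHorizontal(img);
    }
}
```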
The image to be displayed is delivered to the upper-layer application after the angle rotation is completed, so that the application display interface shows the correct display angle, which conforms to the user's usage habits and improves the user experience.
It should be added that the overall architecture diagram of the method for controlling the camera provided by the present application is shown in fig. 10:
The application is located in the application layer of fig. 2; the Camera interface is located on the java side of the application framework layer of fig. 2 and is used for application calls; the single front camera belongs to the hardware layer of the terminal device; and the Camera service is located in the system library of fig. 2. Two logical camera objects are created at the Camera interface, the acquired image is adapted to the display screen by the angle-adjustment module, and the image is finally shown on the display screen.
In some embodiments, the overall flow of the method for controlling a camera provided by the present application, shown in fig. 11, may be implemented as follows:
in step 1101, in response to a camera activation instruction, a camera service is activated.
In step 1102, the number of cameras and parameters are read based on the camera service.
In step 1103, if one front camera is read, a front camera object and a logical camera object are created, and the logical camera corresponding to the logical camera object serves as the rear camera. On this basis, the terminal device with a single front camera includes two camera objects, namely the front camera object and the logical camera object; the logical camera object adapts upper-layer applications' calls to the rear camera, and the front camera object adapts their calls to the front camera.
In step 1104, the parameters of the physical front camera are used to configure both the front camera object and the logical camera object. During configuration, both camera objects are configured with the parameter information of the physical front camera; for example, the size of the physical front camera is configured as the size of both cameras. The identifier and orientation attributes of the logical camera object and the front camera object differ, however: as shown in table 3 above, the identifier and orientation attribute of the front camera object are 1 and front, respectively, while those of the logical camera object are 0 and back.
In step 1105, a calling relationship is established between the two camera objects and the physical camera. While configuring the two camera objects, the present application also establishes the calling relationship between them and the physical camera; as shown in table 3 above, the correspondence between the physical hardware and the camera objects is the calling relationship. Since the terminal device includes only one front camera, whenever an upper-layer application calls a camera, what is actually called is the physical front camera.
In step 1106, the relevant information of the camera calling interface is configured based on the parameters and calling relationships of the front camera object and the logical camera object. The relevant information includes the calling relationship between the camera objects and the physical hardware, as well as the parameter information of the two camera objects. Upper-layer applications call the camera through the camera calling interface; to make it convenient for that interface to call the relevant camera, the information it requires, i.e., the information contained in table 3, is configured into the corresponding interface. For example, if an upper-layer application calls the rear camera (i.e., the logical camera) to scan a QR code, it reads table 3 through the calling interface of the rear camera (e.g., the Camera framework interface) to obtain the calling relationship of the rear camera. The relationship read from table 3 indicates that the physical camera corresponding to the rear camera is the physical front camera, which is then called accordingly; the upper-layer application's call to the rear camera thus succeeds and the code-scanning operation completes. In this way, the application achieves the adaptation between the upper-layer application and the code-scanning function. Based on the above steps, the embodiments of the present application can, under current technical conditions, create a virtual logical camera to replace the rear camera and adapt accordingly to the differences among upper-layer applications, so that devices with a single camera are compatible with upper-layer applications and the photographing and code-scanning functions can be used normally, improving the user experience.
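The calling relationship of steps 1105 and 1106 amounts to a lookup table from exposed camera IDs to physical hardware. A minimal Java sketch, with PHYSICAL_FRONT and openCamera as illustrative names only:

```java
import java.util.HashMap;
import java.util.Map;

public class CameraCallResolver {
    static final String PHYSICAL_FRONT = "physical-front-camera"; // the only real device

    // Calling relationship of Table 3: both exposed IDs resolve to the same hardware.
    static final Map<Integer, String> CALL_RELATION = new HashMap<>();
    static {
        CALL_RELATION.put(0, PHYSICAL_FRONT); // logical (virtual) rear camera
        CALL_RELATION.put(1, PHYSICAL_FRONT); // real front camera
    }

    // Whichever ID the upper-layer application opens, the physical front
    // camera is actually started; only the later image conversion differs.
    static String openCamera(int cameraId) {
        String hw = CALL_RELATION.get(cameraId);
        if (hw == null) throw new IllegalArgumentException("unknown camera id: " + cameraId);
        return hw;
    }

    public static void main(String[] args) {
        System.out.println(openCamera(0)); // e.g., a QR-code scan requesting the rear camera
        System.out.println(openCamera(1)); // e.g., a selfie app requesting the front camera
    }
}
```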
In an exemplary embodiment, the present application also provides a computer-readable storage medium comprising instructions, such as the memory 120 comprising instructions, which are executable by the processor 180 of the terminal device 100 to perform the above-described method of controlling a camera. Alternatively, the computer readable storage medium may be a non-transitory computer readable storage medium, for example, which may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by the processor 180, implements the method of controlling a camera as provided herein.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of controlling a camera, the method comprising:
in response to a camera start instruction, acquiring the number of physical cameras and the camera parameter information of the terminal device;
if the number of physical cameras is one and the physical camera is a front camera, creating a dual-camera object, wherein the dual-camera object comprises the front camera and a logical camera, the logical camera serves as a rear camera, and the camera parameters of the rear camera are the same as those of the front camera; and
configuring the camera parameters of the front camera and the camera parameters of the rear camera into a camera call interface, wherein the camera call interface is used for supporting an upper-layer application in calling the front camera or the rear camera.
2. The method according to claim 1, wherein creating the dual-camera object specifically comprises:
configuring the identifier of the front camera as a first identifier, configuring the orientation attribute of the front camera as a first attribute, and recording the camera parameters of the front camera; and
configuring the identifier of the rear camera as a second identifier, configuring the orientation attribute of the rear camera as a second attribute, and recording the camera parameters of the front camera as the camera parameters of the rear camera.
3. The method of claim 1, further comprising:
in response to a call to the front camera or a call to the rear camera, calling the front camera to acquire an image;
if the front camera is called to acquire the image, performing a first image conversion on the acquired image based on the imaging characteristics of the front camera, and then outputting and displaying the converted image; and
if the rear camera is called to acquire the image, performing a second image conversion on the acquired image based on the imaging characteristics of the rear camera, and then outputting and displaying the converted image.
4. The method according to claim 3, wherein performing the first image conversion on the acquired image based on the imaging characteristics of the front camera and then outputting and displaying the converted image specifically comprises:
acquiring a rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device is located; and
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and then displaying the rotated image with its upper left corner at the upper left corner of the display screen.
5. The method according to claim 4, wherein controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle specifically comprises:
if the terminal device rotates clockwise by a first angle, rotating the acquired image clockwise by a second angle for display, the first angle being within a first specified range with 0 degrees as a reference and the second angle being within a second specified range with 90 degrees as a reference;
if the terminal device rotates clockwise by a third angle, rotating the acquired image clockwise by a fourth angle for display, the third angle being within a third specified range with 90 degrees as a reference and the fourth angle being within a fourth specified range with 180 degrees as a reference;
if the terminal device rotates clockwise by a fifth angle, rotating the acquired image clockwise by a sixth angle for display, the fifth angle being within a fifth specified range with 180 degrees as a reference and the sixth angle being within a sixth specified range with 270 degrees as a reference; and
if the terminal device rotates clockwise by a seventh angle, rotating the acquired image clockwise by an eighth angle for display, the seventh angle being within a seventh specified range with 270 degrees as a reference and the eighth angle being within an eighth specified range with 360 degrees as a reference.
6. The method according to claim 4, wherein performing the second image conversion on the acquired image based on the imaging characteristics of the rear camera and then outputting and displaying the converted image specifically comprises:
acquiring a rotation angle of the terminal device, the rotation angle being the rotation angle of the plane in which the display screen of the terminal device is located; and
controlling the acquired image to rotate by a specified clockwise angle based on the rotation angle, and horizontally mirroring the rotated image for display.
7. The method of claim 4 or 6, wherein the specified clockwise angle is determined using the following formula:
y = (a + x) % b
wherein y represents the specified clockwise angle, x represents the rotation angle, and a and b are both constants.
8. A terminal device, comprising:
a display for displaying the acquired image;
a memory for storing executable instructions of the processor;
a processor for executing the executable instructions to implement the steps of the method of controlling a camera as claimed in any one of claims 1-7.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of a terminal device, enable the terminal device to perform the steps of the method of controlling a camera of any one of claims 1-7.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of controlling a camera of any one of claims 1-7 when executed by a processor.
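As a worked illustration of the formula in claim 7 (the constant values here are assumptions for illustration only): taking a = 270, a sensor orientation commonly reported for front cameras, and b = 360, a device rotation of x = 90 degrees gives

y = (270 + 90) % 360 = 0

so in that position the acquired image would need no additional clockwise rotation before display.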
CN202210108459.8A 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product Active CN114449171B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210108459.8A CN114449171B (en) 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product

Publications (2)

Publication Number Publication Date
CN114449171A true CN114449171A (en) 2022-05-06
CN114449171B CN114449171B (en) 2023-10-27

Family

ID=81372033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210108459.8A Active CN114449171B (en) 2022-01-28 2022-01-28 Method for controlling camera, terminal device, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114449171B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079886A (en) * 2022-07-21 2022-09-20 荣耀终端有限公司 Two-dimensional code recognition method, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205301A1 (en) * 2015-01-13 2016-07-14 Samsung Electronics Co., Ltd. Camera activation and illuminance
CN106851172A (en) * 2017-03-27 2017-06-13 青岛海信电器股份有限公司 The call method of camera, apparatus and system
KR20200070611A (en) * 2018-12-10 2020-06-18 (주)테슬라시스템 Formation and Application Method of Virtual Camera Image for Smartphone
US20210144552A1 (en) * 2019-11-07 2021-05-13 Humans, Inc (dba Flip Fit) Mobile application camera activation and de-activation based on physical object location
CN112804459A (en) * 2021-01-12 2021-05-14 杭州星犀科技有限公司 Image display method and device based on virtual camera, storage medium and electronic equipment
CN113448648A (en) * 2021-06-29 2021-09-28 技德技术研究所(武汉)有限公司 Linux-compatible Android camera access method and device
WO2022012418A1 (en) * 2020-07-16 2022-01-20 华为技术有限公司 Photographing method and electronic device
CN113965809A (en) * 2020-11-30 2022-01-21 深圳卡多希科技有限公司 Method and device for simultaneous interactive live broadcast based on single terminal and multiple platforms

Also Published As

Publication number Publication date
CN114449171B (en) 2023-10-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Applicant after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Applicant before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.

GR01 Patent grant