CN112995562A - Camera calling method and device, storage medium and terminal - Google Patents

Camera calling method and device, storage medium and terminal

Info

Publication number
CN112995562A
CN112995562A
Authority
CN
China
Prior art keywords
camera
calling
rear camera
candidate window
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911279322.3A
Other languages
Chinese (zh)
Inventor
吴超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Coolpad Software Technology Co Ltd
Original Assignee
Nanjing Coolpad Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Coolpad Software Technology Co Ltd filed Critical Nanjing Coolpad Software Technology Co Ltd
Priority to CN201911279322.3A
Publication of CN112995562A
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/141 - Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 - Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002 - Diagnosis, testing or measuring for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the application discloses a method, a device, a storage medium and a terminal for calling a camera, belonging to the field of image acquisition. When the terminal detects an instruction for calling the rear camera, it displays a window in which the user can choose among a plurality of rear cameras, and then calls the rear camera selected by the user to acquire images. This avoids the problems of low utilization and insufficient flexibility caused by always using a fixed rear camera in the prior art.

Description

Camera calling method and device, storage medium and terminal
Technical Field
The present application relates to the field of image acquisition, and in particular, to a method and an apparatus for calling a camera, a storage medium, and a terminal.
Background
As users' demand for photographing grows, a mobile phone is typically equipped with a front camera and one or more rear cameras; the number of rear cameras may be two or more. When the terminal uses a plurality of rear cameras, it generally synthesizes the image acquired by the main camera with images acquired by the other auxiliary cameras to obtain a clear image. The auxiliary cameras include a long-focus (telephoto) camera, a short-focus camera, a macro camera, and the like. In the related art, when the terminal uses a rear camera for a video call, it usually uses the main camera among the rear cameras; for a terminal provided with a plurality of rear cameras, the utilization of the rear cameras is low and the flexibility is poor.
Disclosure of Invention
The embodiment of the application provides a method and a device for calling a camera, a storage medium and a terminal, which can solve the problems of poor flexibility and low utilization of the rear cameras caused by fixedly using the main camera for video calls in the related art. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for calling a camera, where the method for calling the camera includes:
receiving a camera calling request, the calling request being used for calling a rear camera;
displaying a camera candidate window through a display screen, the camera candidate window comprising a plurality of rear camera tags, each rear camera tag corresponding to one rear camera;
receiving a selection instruction for the camera candidate window, and determining the corresponding target rear camera based on the rear camera tag selected by the selection instruction;
and calling the target rear camera to acquire an image.
In a second aspect, an embodiment of the present application provides a device for calling a camera, where the device is configured to perform the following:
detecting a camera calling request, the camera calling request being used for calling a rear camera;
displaying a camera candidate window through a display screen, the camera candidate window comprising a plurality of camera tags, each camera tag corresponding to one rear camera;
receiving a selection instruction for the camera candidate window, and determining a target rear camera based on the camera tag selected by the selection instruction;
and calling the target rear camera to acquire an image.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a device for calling a camera, including: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
When a camera calling request for calling the rear camera is detected, a camera candidate window comprising a plurality of camera tags is displayed through the display unit. The terminal determines the corresponding target rear camera according to the camera tag selected by a selection instruction and calls the target rear camera to acquire images. In this way, any one of the plurality of rear cameras can be called according to the actual requirement of the user, satisfying different application scenarios and solving the problems in the prior art of low rear-camera utilization and insufficient flexibility, caused by only being able to call the main camera for video calls.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 3 is an architecture diagram of the Android operating system of FIG. 1;
FIG. 4 is an architecture diagram of the iOS operating system of FIG. 1;
fig. 5 is a schematic flowchart of a method for calling a camera according to an embodiment of the present application;
fig. 6 is another schematic flowchart of a method for calling a camera according to an embodiment of the present disclosure;
FIG. 7 is a diagram of a chat window provided by an embodiment of the application;
fig. 8 is a schematic diagram of a camera candidate window provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a video call interface provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of an invoking device of a camera according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a calling device of a camera according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem is used to handle wireless communications. It is understood that the modem may also not be integrated into the processor 110 but implemented by a separate communication chip.
The Memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like. The operating system may be an Android system (including systems developed in depth on the basis of the Android system), an iOS system developed by Apple Inc. (including systems developed in depth on the basis of the iOS system), or another system. The data storage area may also store data created by the terminal in use, such as a phonebook, audio and video data, chat log data, and the like.
Referring to fig. 2, the memory 120 may be divided into an operating system space, in which an operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking an operating system as an Android system as an example, programs and data stored in the memory 120 are as shown in fig. 3: a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the terminal, such as a display driver, an audio driver, a camera driver, a bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides database support, the OpenGL/ES library provides support for 3D drawing, the Webkit library provides support for the browser kernel, and so on. Also provided in the system runtime library layer 340 is the Android runtime library (Android runtime), which mainly provides core libraries that allow developers to write Android applications using the Java language. The application framework layer 360 provides various APIs that may be used in building an application, and developers may build their own applications by using these APIs, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management.
At least one application program runs in the application layer 380, and the application programs may be native application programs carried by the operating system, such as a contact program, a short message program, a clock program, a camera application, and the like; or a third-party application developed by a third-party developer, such as a game-like application, an instant messaging program, a photo beautification program, a shopping program, and the like.
Taking an operating system as an iOS system as an example, programs and data stored in the memory 120 are shown in fig. 4, and the iOS system includes: a Core operating system Layer 420 (Core OS Layer), a Core Services Layer 440 (Core Services Layer), a Media Layer 460 (Media Layer), and a touchable Layer 480 (Cocoa Touch Layer). The Core operating system layer 420 includes an operating system kernel, drivers, and underlying program frameworks that provide functionality closer to hardware for use by program frameworks located in the core services layer 440. The core services layer 440 provides system services and/or program frameworks, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth, as required by the application. The media layer 460 provides audiovisual related interfaces for applications, such as graphics image related interfaces, audio technology related interfaces, video technology related interfaces, audio video transmission technology wireless playback (AirPlay) interfaces, and the like. Touchable layer 480 provides various common interface-related frameworks for application development, and touchable layer 480 is responsible for user touch interaction operations on the terminal. Such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a User Interface UIKit framework, a map framework, and so forth.
In the framework shown in FIG. 4, the frameworks associated with most applications include, but are not limited to: the base framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types and provides the most basic system services for all applications, independent of the UI. The UIKit framework provides a basic library of UI classes for creating touch-based user interfaces; iOS applications can provide UIs based on the UIKit framework, so it supplies the application's infrastructure for building user interfaces, drawing, handling user-interaction events, responding to gestures, and the like.
For the manner and principle of realizing data communication between a third-party application program and the operating system in the iOS system, reference can be made to the Android system; details are not repeated herein.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. The touch display screen is generally provided at a front panel of the terminal. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the terminal configurations illustrated in the above figures do not constitute limitations on the terminal: the terminal may include more or fewer components than those illustrated, some components may be combined, or a different arrangement of components may be used. For example, the terminal further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, and a bluetooth module, which are not described herein again.
In the embodiment of the present application, the main body of execution of each step may be the terminal described above. Optionally, the execution subject of each step is an operating system of the terminal. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this embodiment of the present application.
The terminal of the embodiment of the application can also be provided with a display device, and the display device can be any device capable of realizing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a Liquid Crystal Display (LCD), a Plasma Display Panel (PDP), and the like. The user can view displayed information such as text, images and video using the display device on the terminal. The terminal may be a smart phone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace or electronic clothing.
In the terminal shown in fig. 1, the processor 110 may be configured to call an application program stored in the memory 120, and specifically execute the method for calling the camera according to the embodiment of the present application.
According to the technical scheme, when a camera calling request for calling the rear camera is detected, a camera candidate window comprising a plurality of camera tags is displayed through the display unit. The terminal determines the corresponding target rear camera according to the camera tag selected by a selection instruction and calls the target rear camera to acquire images. Any one of the plurality of rear cameras can thus be called according to the actual requirement of the user, satisfying different application scenarios and solving the problems in the prior art of low rear-camera utilization and insufficient flexibility caused by only being able to call the main camera for video calls.
In the following method embodiments, for convenience of description, only the main execution body of each step is described as a terminal.
The following describes in detail a method for calling a camera according to an embodiment of the present application with reference to fig. 5 to 10.
Referring to fig. 5, a schematic flow chart of a method for calling a camera according to an embodiment of the present application is provided, where the processing method includes the following steps:
s501, detecting a camera calling request.
The camera calling request is used for calling a camera to take a picture or make a video call. The camera calling instruction can be sent by a third-party application installed in the terminal; the third-party application can be an instant messaging application, a payment application, a beauty application, or the like. When the terminal is provided with a front camera and a rear camera, the camera calling request can be used for calling either the front camera or the rear camera, and whether the calling instruction targets the front or the rear camera can be distinguished according to a camera flag bit in the request. For example, the camera flag bit is represented by 1 bit: when the bit is 1, the calling instruction is used for calling the front camera; when the bit is 0, it is used for calling the rear camera.
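As an illustrative sketch only (the class name, field layout, and constants are assumptions for this description, not something mandated by the application), the 1-bit flag check described above could look like:

```java
// Hypothetical sketch: distinguish front vs. rear camera requests by a
// single flag bit, following the example in the text (1 = front, 0 = rear).
public class CameraCallRequest {
    public static final int FLAG_FRONT = 1;
    public static final int FLAG_REAR = 0;

    private final int cameraFlag;

    public CameraCallRequest(int cameraFlag) {
        this.cameraFlag = cameraFlag;
    }

    /** Returns true when the request targets a rear camera. */
    public boolean isRearCameraRequest() {
        return cameraFlag == FLAG_REAR;
    }
}
```

Under this assumed layout, `new CameraCallRequest(0).isRearCameraRequest()` evaluates to `true`, which is the condition that triggers display of the candidate window in step S502.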
And S502, displaying the camera candidate window through the display screen.
When the camera calling instruction is a rear camera calling instruction, the terminal displays a camera candidate window through the display screen, wherein the camera candidate window comprises a plurality of camera tags, each camera tag corresponding to one rear camera. When detecting that the camera calling request is a rear camera calling request, the terminal can obtain the plurality of pre-configured rear cameras and then generate a camera tag for each of them. In one possible implementation, the camera candidate window may be a floating layer displayed on the interface of the application, the floating layer being a container including one or more objects disposed above the current user interface.
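A minimal sketch of generating one tag per pre-configured rear camera, as described above; the `CameraTag` fields (an id plus a type string) are illustrative assumptions mirroring the later statement that a tag can carry a camera type:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: build the tag list shown in the floating candidate
// window from the pre-configured rear cameras. Names are assumptions.
public class CandidateWindow {
    public static class CameraTag {
        public final int id;      // identity of the rear camera
        public final String type; // e.g. "main", "telephoto", "macro"
        public CameraTag(int id, String type) {
            this.id = id;
            this.type = type;
        }
    }

    /** One tag per pre-configured rear camera, in configuration order. */
    public static List<CameraTag> buildTags(String... rearCameraTypes) {
        List<CameraTag> tags = new ArrayList<>();
        for (int i = 0; i < rearCameraTypes.length; i++) {
            tags.add(new CameraTag(i, rearCameraTypes[i]));
        }
        return tags;
    }
}
```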
S503, receiving a selection instruction for the camera candidate window, and determining a target rear camera based on the camera tag selected by the selection instruction.
The selection instruction is generated by an operation performed by the user on the camera candidate window; the operation can be a touch operation, a key operation, or another type of operation. The selection instruction is used for selecting one of the plurality of camera tags included in the camera candidate window, and the terminal takes the rear camera corresponding to the selected camera tag as the target rear camera.
And S504, calling a target rear camera to acquire an image.
The terminal calls the target rear camera to take a picture or carry out a video call.
As can be seen from the above, when a camera calling request for calling the rear camera is detected, a camera candidate window comprising a plurality of camera tags is displayed through the display unit. The terminal determines the corresponding target rear camera according to the camera tag selected by the selection instruction and calls the target rear camera to acquire images. Any one of the plurality of rear cameras can thus be called according to the actual requirement of the user, satisfying different application scenarios and solving the problems in the prior art of low rear-camera utilization and insufficient flexibility caused by only being able to call the main camera for video calls.
Referring to fig. 6, another schematic flow chart of a method for calling a camera according to an embodiment of the present application is provided, where the method includes the following steps:
s601, creating a monitoring process and running the monitoring process in a background.
The monitoring process can be a service process in the Android operating system; the created service process runs in the background and is used for monitoring, in a broadcast manner, the camera calling requests sent by the application programs in the Android operating system.
S602, detecting a camera calling request sent by an application program through a monitoring process.
The camera calling request can be sent by a third-party application installed in the terminal or by the camera application installed in the terminal by default; the third-party application can be an instant messaging application, a payment application, a beauty application, or the like. The terminal is provided with a front camera and a plurality of rear cameras, and the camera calling instruction can be used for calling the front camera or a rear camera. The camera calling request carries a camera flag bit, and the monitoring process can distinguish whether the request is used for calling the front camera or the rear camera according to the value of the flag bit. For example, with a 1-bit camera flag bit: when the bit is 1, the monitoring process determines that the camera calling request is used for calling the front camera; when the bit is 0, it determines that the request is used for calling the rear camera. The camera calling request also carries an identifier of the application program that initiated it; the terminal pre-stores or pre-configures a mapping relation between application program identifiers and application program types, and determines the type of the initiating application program according to this mapping relation.
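The identifier-to-type mapping described above could be sketched as a simple lookup table; the package names and type strings below are purely illustrative assumptions, not values from the application:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve the application type from the application
// identifier carried in the calling request, via a pre-configured mapping.
public class AppTypeResolver {
    private final Map<String, String> idToType = new HashMap<>();

    public AppTypeResolver() {
        // Illustrative pre-configured entries (assumed, not from the patent).
        idToType.put("com.example.im", "instant_messaging");
        idToType.put("com.example.pay", "payment");
        idToType.put("com.example.beauty", "beauty");
    }

    /** Returns the application type, or "unknown" if not pre-configured. */
    public String resolve(String appId) {
        return idToType.getOrDefault(appId, "unknown");
    }
}
```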
S603, determining that the application program is a preset application program and that the camera calling request is a rear camera calling request.
The terminal can judge that the camera calling request is used for calling the rear camera according to the camera flag bit carried in the request, and can identify whether the application program initiating the request is a preset application program according to the application program identifier carried in the request.
And S604, displaying the camera candidate window through the display screen.
The camera candidate window may be a floating layer on the user interface of the application program and includes a plurality of camera tags, each of which corresponds to one rear camera. The terminal can generate the plurality of camera tags according to the plurality of pre-configured cameras. A camera tag may also include a camera type and a camera position.
and S605, detecting the selection operation of the user on the camera label in the camera candidate window.
The touch screen of the terminal detects the user's selection operation on the camera candidate window and sends a selection instruction to the processor of the terminal based on that operation; after receiving the selection instruction, the processor selects the corresponding camera tag in the candidate window.
And S606, generating a selection instruction based on the selection operation, and determining the target rear camera based on the camera tag selected by the selection instruction.
Each camera tag in the camera candidate window corresponds to an identifier, the identifier represents the identity of a rear camera, and the target rear camera is determined according to the identifier corresponding to the selected camera tag.
In a possible implementation manner, when the selection operation of the user is not received within a preset time length, a main camera in the multiple cameras is called.
The plurality of cameras comprise one main camera and one or more auxiliary cameras. A preset time length is pre-stored or pre-configured in the terminal; the preset time length can be determined according to actual requirements, which is not limited in this application. The terminal can take the moment at which the camera candidate window is displayed as the starting moment of timing, and monitor whether the user's selection operation is received within the preset time length.
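The timeout fallback above can be modelled in a few lines; representing "no selection within the preset time length" with an empty `Optional` is an implementation assumption of this sketch, as is the id of the main camera:

```java
import java.util.Optional;

// Hypothetical sketch of the timeout fallback: if the user selects no tag
// within the preset duration, the main camera is called.
public class TimeoutFallback {
    public static final int MAIN_CAMERA_ID = 0; // assumed id of the main camera

    /** Returns the camera to call: the user's choice, else the main camera. */
    public static int chooseCamera(Optional<Integer> userSelection) {
        return userSelection.orElse(MAIN_CAMERA_ID);
    }
}
```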
In a possible implementation manner, when the selection operation occurring on the camera candidate window is not detected within a preset time length, the calling times of each rear camera are obtained, and the rear camera with the minimum calling times is used as the target rear camera.
The terminal counts the number of calls of each rear camera within a preset time interval and obtains the call count of each rear camera; each time a rear camera is started, its call count is increased by 1. The terminal takes the rear camera with the fewest calls as the target rear camera, so as to balance the call counts of the rear cameras and prolong their service life.
For example, the terminal is provided with 3 rear cameras. The terminal counts that in the current month the rear camera 1 has been called 20 times, the rear camera 2 has been called 30 times, and the rear camera 3 has been called 40 times; the terminal determines that the rear camera 1 has the fewest calls and takes the rear camera 1 as the target rear camera.
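A sketch of the "fewest calls" fallback, using the example counts from the text; the map-based representation of per-camera call counts is an assumption of this sketch:

```java
import java.util.Map;

// Hypothetical sketch: when no selection arrives within the preset duration,
// pick the rear camera with the smallest recorded call count.
public class LeastCalledSelector {
    /** Returns the id of the rear camera with the fewest recorded calls. */
    public static int select(Map<Integer, Integer> callCounts) {
        int best = -1;
        int bestCount = Integer.MAX_VALUE;
        for (Map.Entry<Integer, Integer> e : callCounts.entrySet()) {
            if (e.getValue() < bestCount) {
                bestCount = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }
}
```

With the call counts from the example (camera 1: 20, camera 2: 30, camera 3: 40), this selects camera 1.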
In a possible implementation manner, when no selection operation on the camera candidate window is detected within a preset time length, the amount of resources required by each rear camera is obtained, and the rear camera requiring the smallest amount of resources is called as the target rear camera.
The amount of resources required by a rear camera represents the size of the resources that camera needs when it is used; the resources include hardware resources and software resources. When no selection operation on the camera candidate window is detected within the preset time, taking the rear camera with the smallest resource requirement as the target rear camera reduces the consumption of system resources when the rear camera is called and improves the smoothness of system operation.
For example, the preset time is 10 s and the resource amount is the size of the memory. The terminal is provided with a rear camera 1, a rear camera 2 and a rear camera 3; the memory required by rear camera 1 is 20 MB, the memory required by rear camera 2 is 30 MB, and the memory required by rear camera 3 is 40 MB. When the terminal does not detect a selection operation on the camera candidate window within 10 s, the terminal determines that rear camera 1 requires the least memory and takes rear camera 1 as the target rear camera.
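The resource-based fallback can be sketched the same way; here the resource amount is memory in MB, and the camera identifiers are again hypothetical.

```java
import java.util.Map;

// Sketch: pick the rear camera whose required resource amount (here,
// required memory in MB) is smallest. Identifiers are illustrative only.
public class LeastCostCamera {
    public static String pickCheapest(Map<String, Integer> requiredMemoryMb) {
        return requiredMemoryMb.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalStateException("no cameras"));
    }
}
```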
And S607, calling the target rear camera to acquire an image.
Referring to fig. 7, which is a schematic diagram of a chat window of an instant messaging application: an icon of the instant messaging application is displayed on the desktop of the terminal. When the terminal detects a click operation on the icon, it executes an instruction for opening the instant messaging application and displays the application's user interface. The user interface includes a buddy list containing a plurality of buddy icons. When the terminal detects a click operation on a buddy icon, the chat window shown in fig. 7 is opened. The chat window includes a message display area and a message input area; below the message input area is a toolbar with a plurality of buttons, including a video call button 70, a voice call button, a photographing button, a picture viewing button, and the like. The video call button 70 is used to perform the video call function.
Referring to the schematic diagram of the camera candidate window shown in fig. 8: the terminal detects a click operation of the user on the video call button 70 and, in response, displays the camera candidate window 80, which includes a camera tag 801, a camera tag 802, and a camera tag 803. The terminal has three rear cameras, and the camera tags on the camera candidate window are in a one-to-one mapping relationship with the rear cameras arranged on the terminal. For example, the rear cameras of the terminal include a wide-angle camera, a long-focus camera and a short-focus camera; camera tag 801 corresponds to the wide-angle camera, camera tag 802 to the long-focus camera, and camera tag 803 to the short-focus camera. The camera candidate window 80 is located above the chat window of the instant messaging application, and the camera candidate window 80 may be a floating layer.
Referring to the schematic diagram of the video call window shown in fig. 9: when the touch screen detects the user's selection operation on a camera tag in the camera candidate window, the touch screen generates a selection instruction according to the selection operation and sends it to the processor of the terminal for execution. The processor determines the camera tag selected by the user according to the selection instruction, and then determines the target rear camera according to the mapping relationship between the camera tag and the camera ID. The terminal calls that rear camera to conduct the video call and displays the video call window shown in fig. 9, which includes an opposite-party video window 90, a local video window 91, and an end call button 92. The opposite-party video window 90 displays the video image of the other party, the local video window 91 displays the video image of the user, and the end call button 92 is used to end the video call.
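The tag-to-camera mapping used when the selection instruction is executed can be sketched as a small lookup table. The tag and camera identifiers below are invented for the example; the patent specifies only that the mapping is one-to-one.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the one-to-one mapping from camera tags in the candidate
// window to camera IDs; resolve() plays the role of the processor step
// that turns the selected tag into the target rear camera.
public class CameraTagMapper {
    private final Map<String, String> tagToCameraId = new HashMap<>();

    public CameraTagMapper() {
        tagToCameraId.put("tag_801", "camera_wide");  // wide-angle camera
        tagToCameraId.put("tag_802", "camera_tele");  // long-focus camera
        tagToCameraId.put("tag_803", "camera_short"); // short-focus camera
    }

    public String resolve(String tagId) {
        String cameraId = tagToCameraId.get(tagId);
        if (cameraId == null) {
            throw new IllegalArgumentException("unknown camera tag: " + tagId);
        }
        return cameraId;
    }
}
```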
As described above, when a camera call request for calling a rear camera is detected, a camera candidate window including a plurality of camera tags is displayed through the display unit; the terminal determines the corresponding target rear camera according to the camera tag selected by the selection instruction and calls the target rear camera to collect images. Any one of the plurality of rear cameras can thus be called according to the user's actual requirements, meeting different application scenarios and solving the problems of low utilization and insufficient flexibility of the rear cameras in the prior art, where only the main camera can be called for a video call.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 10, a schematic structural diagram of a calling device of a camera according to an exemplary embodiment of the present application is shown, and hereinafter referred to as the calling device 10. The invoking device 10 can be implemented by software, hardware or a combination of the two as all or a part of the terminal equipment, and the invoking device 10 includes: a detection unit 1001, a display unit 1002, a determination unit 1003, and a calling unit 1004.
A detection unit 1001 configured to detect a camera call request; the camera calling request is used for calling a rear camera;
a display unit 1002, configured to display a camera candidate window; the camera candidate window comprises a plurality of camera tags, and each camera tag corresponds to a rear camera;
a determining unit 1003, configured to receive a selection instruction for the camera candidate window, and determine a target rear camera based on a camera tag selected by the selection instruction;
and the calling unit 1004 is used for calling the target rear camera to acquire an image.
In one or more possible embodiments, the determining unit 1003 is further configured to:
receiving a camera calling request from an application program;
and determining that the application program is a preset application program.
In one or more possible embodiments, the camera tag includes: camera name, camera position and camera performance parameters.
In one or more possible implementations, the invoking unit 1004 is further configured to:
the selection operation occurring on the camera candidate window is not detected within a preset time length, and a main camera in the multiple cameras is called; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, determining a main camera according to the type of an application program; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, acquiring the calling times of each rear camera, and taking the rear camera with the minimum calling times as a target rear camera; or
and when no selection operation on the camera candidate window is detected within the preset time, acquiring the resource amount required by each rear camera, and taking the rear camera with the minimum resource amount as the target rear camera.
In one or more possible embodiments, the detection unit 1001 is configured to:
creating a monitoring process and running the monitoring process in the background;
and detecting a camera call request distributed by an application program through the monitoring process.
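The monitoring process that detects dispatched camera call requests could be sketched as a background consumer. The queue-based design below is an assumption made for illustration, not the patent's actual mechanism.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Sketch: a daemon thread ("monitoring process") that runs in the
// background and handles camera call requests dispatched by applications.
public class CameraCallMonitor {
    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();

    public void start(Consumer<String> onRequest) {
        Thread monitor = new Thread(() -> {
            try {
                while (true) {
                    onRequest.accept(requests.take()); // block until a request arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop monitoring
            }
        });
        monitor.setDaemon(true); // run in the background
        monitor.start();
    }

    // Called when an application distributes a camera call request.
    public void dispatch(String request) {
        requests.add(request);
    }
}
```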
In one or more possible embodiments, the apparatus further comprises a release unit, configured to release the resources occupied by the target rear camera after the call to the target rear camera is completed.
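The release step can be made automatic with Java's try-with-resources; `CameraSession` below is a hypothetical stand-in for whatever object actually holds the camera's resources.

```java
// Sketch: tie resource release to the end of the call by implementing
// AutoCloseable, so resources are freed even if capture fails.
public class CameraSession implements AutoCloseable {
    private boolean open = true;

    public String capture() {
        if (!open) throw new IllegalStateException("camera already released");
        return "frame"; // placeholder for an acquired image
    }

    @Override
    public void close() {
        open = false; // release the resources occupied by the camera
    }

    public boolean isOpen() {
        return open;
    }
}
```

Usage would be `try (CameraSession s = ...) { s.capture(); }`; the session is released as soon as the block exits.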
In one or more possible embodiments, the determining unit 1003 is further configured to:
detecting the selection operation of the user for a camera tag in the camera candidate window;
generating a selection instruction based on the selection operation; the selection instruction carries an identifier of the camera tag;
and determining the corresponding target rear camera according to the identifier of the camera tag.
It should be noted that when the camera calling apparatus provided in the foregoing embodiment executes the camera calling method, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the camera calling apparatus provided by the above embodiment and the camera calling method embodiments belong to the same concept; for details of the implementation process, refer to the method embodiments, which are not described again here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
When the scheme of the embodiment of the application is executed, upon the calling device 10 detecting a camera call request for calling a rear camera, a camera candidate window including a plurality of camera tags is displayed through the display unit; the terminal determines the corresponding target rear camera according to the camera tag selected by the selection instruction and calls the target rear camera to collect images. Any one of the plurality of rear cameras can thus be called according to the user's actual requirements, meeting different application scenarios and solving the problems of low utilization and insufficient flexibility of the rear cameras in the prior art, where only the main camera can be called for a video call.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the above method steps, and a specific execution process may refer to a specific description of the embodiment shown in fig. 3, which is not described herein again.
The application also provides a terminal which comprises a plurality of rear cameras, a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
Referring to fig. 11, a schematic structural diagram of a terminal according to an embodiment of the present application is shown; the terminal may be used to implement the camera calling method in the foregoing embodiments. Specifically:
the memory 503 may be used to store software programs and modules, and the processor 500 executes various functional applications and data processing by operating the software programs and modules stored in the memory 503. The memory 503 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 503 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 503 may also include a memory controller to provide the processor 500 and the input unit 505 access to the memory 503.
The input unit 505 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 505 may comprise a touch-sensitive surface 506 (e.g., a touch screen, a touch pad, or a touch frame). The touch-sensitive surface 506, also referred to as a touch screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 506 (e.g., operations performed with a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface 506 may comprise both a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 500, and can receive and execute commands sent by the processor 500. Additionally, the touch-sensitive surface 506 may be implemented using resistive, capacitive, infrared, and surface acoustic wave technologies.
The display unit 513 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 513 may include a display panel 514; optionally, the display panel 514 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 506 may overlay the display panel 514: when a touch operation is detected on or near the touch-sensitive surface 506, the touch operation is transmitted to the processor 500 to determine the type of touch event, and the processor 500 then provides a corresponding visual output on the display panel 514 according to the type of touch event. Although in fig. 11 the touch-sensitive surface 506 and the display panel 514 are shown as two separate components to implement the input and output functions, in some embodiments the touch-sensitive surface 506 may be integrated with the display panel 514 to implement the input and output functions.
The processor 500 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 503 and calling data stored in the memory 503, thereby performing overall monitoring of the terminal. Optionally, processor 500 may include one or more processing cores; the processor 500 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 500.
Specifically, in this embodiment, the display unit of the terminal is a touch screen display, and the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors to implement the following steps:
detecting a camera calling request; the camera calling request is used for calling a rear camera;
displaying the camera candidate window through a display unit; the camera candidate window comprises a plurality of camera tags, and each camera tag is associated with a rear camera;
receiving a selection instruction for the camera candidate window through the input unit 505, and determining a target rear camera based on a camera tag selected by the selection instruction;
and calling the target rear camera to acquire an image.
In one or more possible implementations, the processor 500 is further configured to: receive a camera call request from an application program through the input unit 505;
and determining that the application program is a preset application program.
In one or more possible embodiments, the camera tag includes: camera name, camera position and camera performance parameters.
In one or more possible embodiments, the processor 500 is further configured to:
when no selection operation on the camera candidate window is detected within a preset time length, call a main camera among the multiple cameras; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, determining a main camera according to the type of an application program; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, acquiring the calling times of each rear camera, and taking the rear camera with the minimum calling times as a target rear camera; or
and when no selection operation on the camera candidate window is detected within the preset time, acquiring the resource amount required by each rear camera, and taking the rear camera with the minimum resource amount as the target rear camera.
In one or more possible embodiments, the processor 500 detecting a camera call request comprises:
creating a monitoring process and running the monitoring process in the background;
and detecting a camera call request distributed by an application program through the monitoring process.
In one or more possible embodiments, the processor 500 is further configured to perform:
and after the calling of the target rear camera is finished, releasing the resources occupied by the target rear camera.
In one or more possible embodiments, the processor 500 receiving a selection instruction for the camera candidate window through the input unit 505 and determining a target rear camera based on the camera tag selected by the selection instruction comprises:
detecting the selection operation of the user for a camera tag in the camera candidate window;
generating a selection instruction based on the selection operation; the selection instruction carries an identifier of the camera tag;
and determining the corresponding target rear camera according to the identifier of the camera tag.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
All functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for calling a camera is characterized by comprising the following steps:
detecting a camera calling request; the camera calling request is used for calling a rear camera;
displaying the camera candidate window through a display unit; the camera candidate window comprises a plurality of camera tags, and each camera tag is associated with a rear camera;
receiving a selection instruction aiming at the camera candidate window, and determining a target rear camera based on a camera label selected by the selection instruction;
and calling the target rear camera to acquire an image.
2. The calling method according to claim 1, wherein before detecting the camera call request, further comprising:
receiving a camera calling request from an application program;
and determining that the application program is a preset application program.
3. The calling method according to claim 1, wherein the camera tag comprises: camera name, camera position and camera performance parameters.
4. The calling method of claim 1, further comprising:
the selection operation occurring on the camera candidate window is not detected within a preset time length, and a main camera in the multiple cameras is called; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, determining a main camera according to the type of an application program; or
When the selection operation occurring on the camera candidate window is not detected within the preset time length, acquiring the calling times of each rear camera, and taking the rear camera with the minimum calling times as a target rear camera; or
and when no selection operation on the camera candidate window is detected within the preset time, acquiring the resource amount required by each rear camera, and taking the rear camera with the minimum resource amount as the target rear camera.
5. The calling method of claim 1, wherein detecting a camera call request comprises:
creating a monitoring process and running the monitoring process in the background;
and detecting a camera call request distributed by an application program through the monitoring process.
6. The calling method of claim 1, further comprising:
and after the calling of the target rear camera is finished, releasing the resources occupied by the target rear camera.
7. The calling method according to claim 1, wherein the receiving a selection instruction for the camera candidate window, and the determining a target rear camera based on the camera tag selected by the selection instruction comprises:
detecting the selection operation of the user for a camera tag in the camera candidate window;
generating a selection instruction based on the selection operation; the selection instruction carries an identifier of the camera tag;
and determining the corresponding target rear camera according to the identifier of the camera tag.
8. A calling device of a camera is characterized by comprising:
the detection unit is used for detecting a camera calling request; the camera calling request is used for calling a rear camera;
the display unit is used for displaying the camera candidate window; the camera candidate window comprises a plurality of camera tags, and each camera tag corresponds to a rear camera;
the determining unit is used for receiving a selection instruction aiming at the camera candidate window and determining a target rear camera based on the camera label selected by the selection instruction;
and the calling unit is used for calling the target rear camera to acquire images.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 7.
CN201911279322.3A 2019-12-13 2019-12-13 Camera calling method and device, storage medium and terminal Pending CN112995562A (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210618)