CN116489504A - Control method of camera module, camera module and electronic equipment


Info

Publication number
CN116489504A
Authority
CN
China
Prior art keywords
image
camera
target
viewfinding
acquisition instruction
Legal status
Pending
Application number
CN202310484162.6A
Other languages
Chinese (zh)
Inventor
李晓龙 (Li Xiaolong)
丁博 (Ding Bo)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority application: CN202310484162.6A
Publication: CN116489504A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N 23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N 23/662 Transmitting camera control signals through networks by using master/slave camera arrangements for affecting the control of camera image capture, e.g. placing the camera in a desirable condition to capture a desired image
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The application discloses a control method of a camera module, a camera module, and an electronic device. The control method includes: in response to a first image acquisition instruction, controlling a first camera to acquire a first image; controlling at least one second camera to acquire at least one second image; and processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area. The viewfinding range of a second camera when acquiring a second image is smaller than the viewfinding range of the first camera when acquiring the first image, and each second image includes at least one viewfinding object present in the first image.

Description

Control method of camera module, camera module and electronic equipment
Technical Field
The application relates to the technical field of image processing, and in particular to a control method of a camera module, a camera module, and an electronic device.
Background
In some real-time image or video display scenarios, large scenes are often displayed. Because the field of view of such image content is relatively large, the photographed subjects in the image appear very small, so that viewers cannot clearly see the content they want to focus on.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
the control method of the camera module comprises a first camera and at least one second camera, and the method comprises the following steps:
responding to a first image acquisition instruction, and controlling a first camera to acquire a first image;
controlling at least one second camera to acquire at least one second image;
performing corresponding processing on the first image and the at least one second image to obtain a target image which can be displayed and output to a target display area;
the second camera is used for acquiring second images, wherein the view finding range of the second camera when acquiring the second images is smaller than the view finding range of the first camera when acquiring the first images, and each second image comprises at least one view finding object existing in the first image.
Optionally, controlling the at least one second camera to acquire at least one second image includes:
acquiring position information and/or feature information of at least one specified viewfinding object based on the first image;
and transmitting the position information and/or feature information to at least one corresponding second camera, so as to control the at least one second camera to acquire, based on that information, at least one second image of the specified viewfinding object.
Optionally, controlling the at least one second camera to acquire at least one second image includes:
during acquisition of the first image, in response to obtaining a second image acquisition instruction for a specified viewfinding object, controlling at least one second camera to acquire at least one second image of that object;
wherein obtaining the second image acquisition instruction includes at least one of:
determining that the second image acquisition instruction is obtained in response to a selection operation on a preview interface of the first image;
determining that the second image acquisition instruction is obtained in response to a voice input for at least one viewfinding object in the first image;
determining that the second image acquisition instruction is obtained in response to a line-of-sight control operation by a target user within the viewfinding range of the first camera;
determining that the second image acquisition instruction is obtained in response to a gesture control operation by a target user within the viewfinding range of the first camera;
and performing sound source localization on sound data within the viewfinding range of the first camera, and obtaining the second image acquisition instruction based on the localization result.
Optionally, controlling the at least one second camera to acquire at least one second image includes:
if the first image acquisition instruction includes description information for at least one specified viewfinding object, synchronously controlling at least one second camera, based on that description information, to acquire at least one second image of the specified viewfinding object.
Optionally, obtaining the first image acquisition instruction includes at least one of:
determining that the first image acquisition instruction is obtained in response to a trigger operation on a target image acquisition application;
generating the first image acquisition instruction in response to determining that target interaction data occurs between an electronic device and a target device, where the electronic device is the device carrying the camera module;
determining that the first image acquisition instruction is obtained in response to the camera module entering a target environment area;
determining that the first image acquisition instruction is obtained in response to the camera module switching from a first motion state to a second motion state, where the motion variation of the camera module in the first motion state is larger than in the second motion state;
and determining that the first image acquisition instruction is obtained in response to the camera module switching from a first pose state to a second pose state.
Optionally, controlling the at least one second camera to acquire at least one second image includes at least one of:
if at least two specified viewfinding objects exist in the first image, controlling at least two second cameras to acquire at least two second images of the two specified viewfinding objects;
if one specified viewfinding object exists in the first image, controlling one or at least two second cameras to acquire at least one second image of the specified viewfinding object;
if an image acquisition instruction for at least two specified viewfinding objects is obtained, controlling at least two second cameras to acquire at least two second images of the two specified viewfinding objects;
if an image acquisition instruction for a single specified viewfinding object is obtained, controlling one or at least two second cameras to acquire at least one second image of the specified viewfinding object;
and obtaining configuration information of the second cameras, and controlling at least one second camera, based on the configuration information and the number of specified viewfinding objects, to acquire at least one second image of the specified viewfinding object(s).
Optionally, processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area includes at least one of:
stitching the first image and the at least one second image to obtain a third image that can be displayed in the target display area;
superposing the first image and the at least one second image to obtain a fourth image that can be displayed in the target display area;
embedding the at least one second image into the first image to obtain a fifth image that can be displayed in the target display area;
processing the at least one second image into a control whose display is triggered by a target operation on the first image, and outputting the first image including the control as the target image displayed in the target display area;
and processing the first image into a control whose display is triggered by a target operation on any second image, and outputting the second image including the control as the target image displayed in the target display area.
Optionally, processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area includes at least one of:
obtaining configuration information of the target display area, and processing the first image and the at least one second image based on that configuration information to obtain the target image;
obtaining receiving-object information of the target image, and processing the first image and the at least one second image based on that information to obtain the target image;
and obtaining processing capability information of the electronic device carrying the camera module and/or of a receiving device for receiving the target image, and processing the first image and the at least one second image based on that capability information to obtain the target image.
The application also discloses a camera module including a first camera, at least one second camera, and:
a first control module, configured to control the first camera to acquire a first image in response to a first image acquisition instruction;
a second control module, configured to control at least one second camera to acquire at least one second image;
an image processing module, configured to process the first image and the at least one second image to obtain a target image that can be displayed in a target display area;
wherein the viewfinding range of a second camera when acquiring a second image is smaller than the viewfinding range of the first camera when acquiring the first image, and each second image includes at least one viewfinding object present in the first image.
Further, the application also discloses an electronic device including the camera module, the camera module including:
a first camera and at least one second camera;
a processor;
a memory for storing executable program instructions of the processor;
wherein the executable program instructions include: in response to a first image acquisition instruction, controlling the first camera to acquire a first image; controlling at least one second camera to acquire at least one second image; and processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area, where the viewfinding range of a second camera when acquiring a second image is smaller than that of the first camera when acquiring the first image, and each second image includes at least one viewfinding object present in the first image.
As can be seen from the above technical solution, the embodiments of the present application disclose a control method of a camera module, a camera module, and an electronic device. The control method includes: in response to a first image acquisition instruction, controlling a first camera to acquire a first image; controlling at least one second camera to acquire at least one second image; and processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area, where the viewfinding range of a second camera when acquiring a second image is smaller than that of the first camera when acquiring the first image, and each second image includes at least one viewfinding object present in the first image. With this scheme, when image acquisition is needed, different cameras obtain the overall scene image and close-up images of the objects of interest respectively, and the images of different viewfinding ranges are processed into a target image that can be presented on a display screen.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments will be briefly described below. It is obvious that the drawings in the following description show only embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a control method of a camera module disclosed in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a first image and a second image disclosed in an embodiment of the present application;
FIG. 3 is a flowchart of acquiring a second image according to an embodiment of the present disclosure;
FIG. 4 is an exemplary diagram of different second images of a specified viewfinding object disclosed in an embodiment of the present application;
fig. 5 is a schematic diagram of a display effect of a target image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a display effect of another target image according to an embodiment of the present disclosure;
fig. 7 is a schematic view of a display effect of another target image according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a control structure of a camera module according to an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
The embodiments of the application can be applied to electronic devices. The product form of the electronic device is not limited and may include, but is not limited to, a smartphone, a tablet computer, a wearable device, a personal computer (PC), a netbook, and the like, selected according to application requirements.
Fig. 1 is a flowchart of a control method of a camera module disclosed in an embodiment of the present application. The camera module to which the control method applies includes a first camera and at least one second camera; the performance parameters of the cameras in the module may be the same, different, or partially the same, which is not limited in this application. Referring to fig. 1, the control method of the camera module may include:
Step 101: controlling the first camera to acquire a first image in response to a first image acquisition instruction.
The first image acquisition instruction can be triggered in various ways, including but not limited to the following: the user triggers a photographing application, for example by clicking it so that it runs; interaction data between the device to which the camera module belongs and another device triggers it, for example the other device establishes a video call; entry into a photographing area determined from positioning information triggers it, such as the device entering a scenic-spot photo point or a meeting area; a preset event triggers it, such as a user-configured time at which image collection starts; motion data triggers it, such as switching from one particular motion state to another; or a pose change triggers it, such as the camera facing obliquely downward.
After the camera module obtains the first image acquisition instruction, the first camera is controlled to acquire a first image. The first image can be understood as an image containing the overall scene and may include multiple photographed objects; these may be movable objects such as people and animals, or objects with fixed position and form, such as exhibits.
The first camera can be a fixed camera in the module, or a variable one: a suitable camera can be selected from the module as the first camera based on the actual application scene, with the remaining cameras serving as second cameras. For example, in an application scene with a large viewfinding range, a camera with a shorter focal length is chosen as the first camera; in a scene with a smaller viewfinding range, a camera with a longer focal length is chosen. In the embodiments of this application, the first camera is configured to collect the overall image of the scene.
Step 102: controlling at least one second camera to acquire at least one second image.
The number of second cameras in the camera module is not fixed, and the number actually used during acquisition may equal or differ from the number available. For example, with 3 second cameras and only one object of interest in the scene, a single second camera may acquire the second image; with two objects of interest, two cameras are needed to acquire a second image each. Note that in the embodiments of this application, the second image acquired by a second camera is a close-up image of a specific object of interest, so the viewfinding objects in second images acquired by different second cameras are different, or at least partially different. Accordingly, the viewfinding range of a second camera when acquiring a second image is smaller than that of the first camera when acquiring the first image, and each second image includes at least one viewfinding object present in the first image. Fig. 2 is an exemplary diagram of a first image and a second image disclosed in an embodiment of the present application; the disclosure here can be understood in conjunction with fig. 2.
In addition, the execution order of the first camera acquiring the first image and the second camera acquiring the second image is not limited. In practice, because of different performance parameters of the cameras or different execution times of the related processing algorithms, there may be a small time difference between when the two cameras acquire images, but this does not imply a logical dependency between the two acquisitions. They may be performed simultaneously or in sequence; the corresponding implementations are described in the following embodiments and are not detailed here.
Step 103: processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area.
The target image may contain only the first image, only a second image, or both. The first image and the at least one second image are processed in real time; they can be processed into the target image in a default manner, or, based on user-triggered instructions, into a target image that meets the user's needs. That is, the presentation form and content of the target image are not fixed, and different contents and forms can be presented based on the user's needs.
According to the control method of the camera module described above, when image acquisition is required, different cameras obtain the overall scene image and close-up images of the objects of interest respectively, and the images of different viewfinding ranges are processed into a target image that can be presented on a display screen.
Fig. 3 is a flowchart of acquiring a second image according to an embodiment of the present application. As described in connection with fig. 3, in the foregoing embodiment, controlling at least one second camera to acquire at least one second image may include:
Step 301: acquiring position information and/or feature information of at least one specified viewfinding object based on the first image.
As described above, the first image may be an image of the entire scene, corresponding to a relatively large viewfinding range, and may include many objects. Not all of them are of interest to the user, who may be focusing on only one or two; therefore a specified object needs to be determined among the objects in the first image, as the object whose close-up image the second camera should capture separately.
The position information may be the position coordinates of the specified viewfinding object, and the feature information may be an identifier of the object or descriptive features for identifying it, such as contour, clothing, hairstyle, or motion posture. The obtained information needs to be transmitted to the second camera so that it can identify and determine the object whose close-up image is to be taken, i.e. the specified viewfinding object.
Step 302: transmitting the position information and/or feature information to at least one corresponding second camera, so as to control the at least one second camera to acquire, based on that information, at least one second image of the specified viewfinding object.
The transmission of the position and/or feature information to the at least one second camera may be controlled by a controller or processor of the camera module, or performed by the image signal processor of the first camera. After receiving the information, the second camera identifies the specified viewfinding object within its viewfinding range and performs the acquisition to obtain an image of that object.
For example, if the second camera receives the position coordinates of the specified viewfinding object in the first image, it may determine, based on those coordinates and the positional relationship between the second camera and the first camera, the region where the object lies within its own viewfinding range, and acquire an image of that region.
For another example, if the second camera receives the clothing and hairstyle of the specified viewfinding object, it can identify an object with the corresponding clothing and hairstyle within its viewfinding range by image recognition, and acquire an image of that object.
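A minimal sketch of this coordinate handoff, assuming the two cameras are calibrated so that coordinates in the first image map into the second camera's frame by a fixed 2x3 affine transform; the transform values and the function name are illustrative assumptions:

    import numpy as np

    def map_to_second_camera(bbox_first, affine):
        """Map a bounding box (x, y, w, h) from the first image into the
        second camera's viewfinding coordinates via a 2x3 affine transform
        obtained from calibrating the two cameras' relative poses."""
        x, y, w, h = bbox_first
        corners = np.array([[x, y, 1], [x + w, y + h, 1]], dtype=float)
        (x0, y0), (x1, y1) = corners @ affine.T  # apply the transform
        return x0, y0, x1 - x0, y1 - y0

    # Illustrative calibration: the second camera sees a 2x magnified crop
    # of the scene shifted by (-400, -300) pixels.
    affine = np.array([[2.0, 0.0, -400.0],
                       [0.0, 2.0, -300.0]])
    print(map_to_second_camera((520, 360, 80, 120), affine))  # (640.0, 420.0, 160.0, 240.0)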
Because the second camera captures a close-up of the specified viewfinding object, the region where the object lies is magnified when photographed, so the acquired second image may not be sharp enough. In that case, in other implementations, the second image obtained by the second camera may be optimized, for example by image interpolation or image filtering, to increase its sharpness.
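As an illustration of such optimization, a sketch using OpenCV (an assumed dependency; the application names no library) that upscales the close-up by bicubic interpolation and then applies a simple sharpening filter:

    import cv2
    import numpy as np

    def enhance_close_up(second_image, scale=2.0):
        """Upscale the close-up by bicubic interpolation, then sharpen it
        with a small convolution kernel to offset the softness introduced
        by magnifying the specified viewfinding object."""
        upscaled = cv2.resize(second_image, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_CUBIC)
        kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)
        return cv2.filter2D(upscaled, -1, kernel)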
In this embodiment, a first image is obtained by the first camera, a specified viewfinding object is then determined from it, and the second camera is controlled to capture a close-up of that object. Thus a detailed close-up of the specified object is available together with the overall scene image, providing the basis for letting the user view the overall scene and the point of interest at the same time.
In another implementation, controlling the at least one second camera to acquire the at least one second image may include: during acquisition of the first image, in response to obtaining a second image acquisition instruction for a specified viewfinding object, controlling at least one second camera to acquire at least one second image of that object.
Acquisition of the first image may be the acquisition of a preview image; that is, in the image preview stage the user may select the specified viewfinding object from the preview image and trigger the second camera to capture it. The user's selection of the specified viewfinding object is the operation that triggers generation of the second image acquisition instruction for that object.
There are several ways to obtain the second image acquisition instruction; some are described below.
In one implementation, the second image acquisition instruction is determined to be obtained in response to a selection operation on the preview interface of the first image. For example, the preview interface includes several people; when the user taps one of them with a finger, a second image acquisition instruction is generated instructing the second camera to treat the tapped person (i.e. the specified viewfinding object) as the object to be photographed.
In one example, the second image acquisition instruction is determined to be obtained in response to a voice input for at least one viewfinding object in the first image. For example, after the preview interface is presented, the user may issue a voice command such as "the child wearing red clothes", generating a second image acquisition instruction that instructs the second camera to treat the child wearing red clothes within its viewfinding range as the object to be photographed.
In one example, the second image acquisition instruction is determined to be obtained in response to a line-of-sight control operation by the target user within the viewfinding range of the first camera. For example, after the preview interface is presented on a mobile phone display, the front camera can capture the user's face, determine the gaze direction with a corresponding algorithm, identify from the gaze direction and the presented preview image which object the user is watching, and then generate a second image acquisition instruction instructing the second camera to treat that object as the object to be photographed.
In one example, the second image acquisition instruction is determined to be obtained in response to a gesture control operation by the target user within the viewfinding range of the first camera. For example, the preview interface includes 5 people, each with a different posture; the user may trigger a second image acquisition instruction by making, in front of the front camera, the same gesture as one of them, and the instruction directs the second camera to treat the person with the matching posture as the object to be photographed.
In one example, sound source localization is performed on sound data within the viewfinding range of the first camera, and the second image acquisition instruction is obtained based on the localization result. For example, in a conference scenario the first camera's viewfinding range includes several people, but one of them is hosting or speaking; sound source localization can determine which person is speaking, after which a second image acquisition instruction is generated instructing the second camera to treat the identified speaker as the object to be photographed.
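All five trigger forms can be normalized into one instruction shape before it reaches the second camera. A sketch, with the event types and payload format as hypothetical choices; resolving a gaze, gesture, or sound source to a concrete target is assumed to happen upstream:

    from enum import Enum, auto

    class InstructionSource(Enum):
        PREVIEW_TAP = auto()   # selection on the first image's preview
        VOICE = auto()         # voice input naming a viewfinding object
        GAZE = auto()          # line-of-sight control by the target user
        GESTURE = auto()       # gesture control in the viewfinding range
        SOUND_SOURCE = auto()  # sound source localization result

    def second_acquisition_instruction(source, target):
        """Wrap an already-resolved target (coordinates or a feature
        description) as a second image acquisition instruction."""
        return {"source": source, "target": target}

    # e.g. a speaker located by sound source localization:
    inst = second_acquisition_instruction(InstructionSource.SOUND_SOURCE,
                                          {"position": (812, 430)})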
The above describes several implementations by which the second camera determines the specified viewfinding object and obtains the second image; they do not constitute a fixed limitation on how the second image is obtained. Supporting multiple types and ways of obtaining the second image widens the applicability of the scheme and makes it easier for those skilled in the art to implement it.
In another implementation, controlling the at least one second camera to acquire the at least one second image may include: if the first image acquisition instruction includes description information for at least one specified viewfinding object, synchronously controlling at least one second camera, based on that description information, to acquire at least one second image of the specified viewfinding object.
For example, after opening the photographing application the user issues the voice command "photograph the person wearing a hat", and the controller of the camera module then controls the first camera and the second camera to acquire images synchronously. The first camera acquires the overall scene image containing the person wearing a hat, while a second camera separately acquires an image of that person. If one person in the viewfinding range wears a hat, one second camera is controlled to capture that person; if two people wear hats, two second cameras are controlled to capture them respectively.
Of course, the description information for at least one specified viewfinding object contained in the first image acquisition instruction is not limited to the form or content of a voice command; it may be other description information, which this application does not limit.
This implementation introduces synchronous control of the first camera and the second camera: the first and second images are acquired at the same time, without waiting for the first image or a preview image before acquiring the second image. The target image can therefore be processed and presented to the user promptly, reducing the user's sense of waiting and improving the user experience.
In the foregoing embodiments, there are also various implementations of obtaining the first image acquisition instruction; several are described below.
In one example, obtaining the first image acquisition instruction may include: determining that the first image acquisition instruction is obtained in response to a trigger operation on a target image acquisition application. For example, the user clicks the camera application or another photographing application on the desktop, such as an image acquisition application with beautifying and editing functions; the application starts running, and the first image acquisition instruction is determined to be obtained.
In one example, obtaining the first image acquisition instruction may include: generating the first image acquisition instruction in response to determining that target interaction data occurs between the electronic device and a target device, where the electronic device is the device carrying the camera module. For example, the electronic device is connected to a large-screen device; the large-screen device sends a verification code (the target interaction data) to the electronic device, and after the electronic device verifies the code, the first image acquisition instruction is generated. Alternatively, the interaction data may be communication data between the target user and a specific conversation partner, such as a chat message; if the partner asks the target user to show an image of the surroundings, generation of the first image acquisition instruction may be triggered.
In one example, obtaining the first image acquisition instruction may include: determining that the first image acquisition instruction is obtained in response to the camera module entering a target environment area. For example, some scenic areas set up photographing zones at well-known spots; when the electronic device carrying the camera module enters such a zone, the camera module starts automatically and generation of the first image acquisition instruction is triggered.
In one example, obtaining the first image acquisition instruction may include: determining that the first image acquisition instruction is obtained in response to the camera module switching from a first motion state to a second motion state, where the motion variation in the first state is larger than in the second. For example, a user carries a mobile phone while running every morning, during which the phone moves quickly; after the run the user slows to a walk and raises the phone to record the run, so the phone now moves slowly with the user. In this case generation of the first image acquisition instruction may be triggered.
In one example, obtaining the first image acquisition instruction may include: determining that the first image acquisition instruction is obtained in response to the camera module switching from a first pose state to a second pose state. For example, a phone normally lies with the screen facing up and the back facing down; when the user lifts it so that the screen is vertical or tilted downward, generation of the first image acquisition instruction may be triggered.
These are several implementations of obtaining the first image acquisition instruction; they do not constitute a fixed limitation. Supporting multiple types and ways of obtaining the instruction widens the applicability of the scheme and makes it easier for those skilled in the art to implement it.
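For the motion-state trigger in particular, a sketch under the assumption that motion variation is measured as the standard deviation of accelerometer magnitudes over a sliding window; the thresholds are illustrative:

    import numpy as np

    def motion_state_trigger(accel_window, fast_thresh=2.0, slow_thresh=0.5):
        """Detect a switch from a first motion state (large variation,
        e.g. running) to a second one (small variation, e.g. walking):
        the first half of the window varies strongly, the second is calm."""
        half = len(accel_window) // 2
        return (np.std(accel_window[:half]) > fast_thresh and
                np.std(accel_window[half:]) < slow_thresh)

    # A True result would mean: generate the first image acquisition instruction.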
In one implementation, controlling the at least one second camera to acquire the at least one second image may include: if at least two specified viewfinding objects exist in the first image, controlling at least two second cameras to acquire at least two second images of the two specified objects. The specified viewfinding objects in the first image may be determined by image recognition or by a selection operation performed by the user on the preview interface.
For example, many pictures containing many people are stored on the phone to which the camera module belongs; when the photographing application is used, people who appear both in the local album and in the first image can automatically be determined, by image recognition, as specified viewfinding objects. If two children from the local album, child A and child B, are recognized among the dozen or more people in the first image, they are determined as specified viewfinding objects, and two second cameras are controlled to obtain an image of child A and an image of child B respectively.
In one implementation, controlling the at least one second camera to acquire the at least one second image may include: if one specified viewfinding object exists in the first image, controlling one or at least two second cameras to acquire at least one second image of that object.
For example, in a stage performance screen-casting scene, only the single performer is determined as the specified viewfinding object. One second camera can be controlled to capture a close-up of the performer, or two cameras can capture close-ups separately; then either the better of the two close-ups is selected as the final second image, or both second images are kept.
In one implementation, controlling the at least one second camera to acquire the at least one second image may include: if image acquisition instructions for at least two specified viewfinding objects are obtained, controlling at least two second cameras to acquire at least two second images of the two specified objects.
For example, in a conference group photo scene, the user taps and selects the two core participants on the preview interface, determining two specified viewfinding objects; two second cameras are then controlled to collect close-up images, different second cameras capturing different specified objects.
In one implementation, controlling the at least one second camera to acquire the at least one second image may include: if an image acquisition instruction for a single specified viewfinding object is obtained, controlling one or at least two second cameras to acquire at least one second image of that object.
For example, the first image contains only one person, the rest being background. The first camera can be controlled to obtain an overall panoramic image containing the person's whole body, while one or at least two second cameras obtain close-ups of the person; with two second cameras, one can be controlled to capture the person's upper body and the other the person's head. Fig. 4 shows examples of different second images of a specified viewfinding object and can be consulted for understanding.
In one implementation, controlling the at least one second camera to acquire the at least one second image may include: obtaining configuration information of the second cameras, and controlling at least one second camera, based on the configuration information and the number of specified viewfinding objects, to acquire at least one second image. The configuration information may include, but is not limited to, the hardware configuration of a camera, such as viewfinding range, focal length, resolution, depth of field, and imaging parameters.
For example, in a group photo of many people, the first image may include 3 specified objects in the middle that are adjacent to each other; the area containing the 3 objects can then be taken as one close-up area to be captured separately, and a single second camera controlled to photograph it. In one implementation the second cameras have different focal segments, so a second camera suited to the close-up area can be selected from several based on the area's size, and the second image of the close-up area obtained with the selected camera.
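A sketch of selecting a second camera from its configuration information, assuming the relevant configuration item is the horizontal field of view in degrees; the camera ids and values are illustrative:

    def pick_second_camera(cameras, region_width_deg):
        """Choose the second camera whose field of view most tightly
        covers the close-up region. `cameras` maps a camera id to its
        configured horizontal field of view in degrees."""
        suitable = {cid: fov for cid, fov in cameras.items()
                    if fov >= region_width_deg}
        if not suitable:
            return None  # no second camera can frame the whole region
        return min(suitable, key=suitable.get)

    # Three second cameras with different focal segments (a narrower
    # field of view corresponds to a longer focal length):
    print(pick_second_camera({"tele": 15.0, "mid": 35.0, "wide": 70.0}, 20.0))
    # -> "mid": the tightest field of view that still covers 20 degrees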
The above describes several implementations of controlling at least one second camera to obtain at least one second image, making it easier for those skilled in the art to understand and apply the technical scheme of the present application.
In the foregoing embodiment, processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area may include: stitching the first image and the at least one second image to obtain a third image that can be displayed in the target display area.
For example, after the first image and the at least one second image are obtained, they may be stitched directly; the stitching may be left-right, top-bottom, or by image type and region. Fig. 5 is an example of left-right stitching of a first image and a second image, and fig. 6 is an example of region-divided stitching of a first image and two second images; the display effect of the third image can be understood from figs. 5 and 6.
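A sketch of the left-right stitching variant, assuming both images are numpy arrays with the same number of channels:

    import cv2
    import numpy as np

    def stitch_left_right(first_image, second_image):
        """Resize the close-up to the scene image's height, then
        concatenate the two side by side into one third image."""
        h = first_image.shape[0]
        scale = h / second_image.shape[0]
        resized = cv2.resize(second_image,
                             (int(second_image.shape[1] * scale), h))
        return np.hstack([first_image, resized])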
In another implementation, processing the first image and the at least one second image may include: superposing the first image and the at least one second image to obtain a fourth image that can be displayed in the target display area.
For example, the display area shows the first image in full, and the second image, smaller in size, is displayed covering part of the first image; the display effect is shown in fig. 7. Of course, depending on the user's needs, the full display area can instead show the second image, with the first image reduced and overlaid on it. Note that in this superposed display, the position of the small upper image can be moved by a trigger operation of the user; for example, the second image originally displayed in the upper right corner of the first image can be dragged to the lower right corner by a touch operation.
In another implementation, processing the first image and the at least one second image may include: embedding the at least one second image into the first image to obtain a fifth image that can be displayed in the target display area.
For example, the small second image is embedded at a fixed position in the first image to obtain a fifth image whose overall display effect is as in fig. 7. Unlike the previous implementation, however, the position of the embedded image cannot be changed by the user's touch operation: once embedding is complete, the relative position of the first and second images is fixed.
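Both the superposed fourth image and the embedded fifth image can be produced by the same compositing step; the difference lies in whether the position may later change. A sketch assuming the close-up fits entirely within the scene image:

    import numpy as np

    def overlay_close_up(first_image, second_image, top_left):
        """Place the smaller close-up onto the scene image at `top_left`
        (row, col). For the fourth image this position may be changed by
        a later drag operation; for the embedded fifth image it is fixed."""
        out = first_image.copy()
        r, c = top_left
        h, w = second_image.shape[:2]
        out[r:r + h, c:c + w] = second_image
        return out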
In another implementation, processing the first image and the at least one second image may include: processing the at least one second image into a control whose display is triggered by a target operation on the first image, and outputting the first image including the control as the target image displayed in the target display area.
For example, the target image is the first image, but when the user performs a specific operation on it, such as moving two touch points apart (the zoom-in gesture on a touch screen), the previously hidden second image is displayed in a specific area. That area may be part of the first image's display area, overlaid on the first image, or the display area may be split into two parts, one showing the first image and the other the second image.
Note that the control may be a thumbnail of the second image or the complete second image; alternatively, the control may simply be a virtual button which, when triggered, presents the second image.
In another implementation, processing the first image and the at least one second image may include: processing the first image into a control whose display is triggered by a target operation on any second image, and outputting the second image including the control as the target image displayed in the target display area.
Similarly to the previous implementation, when the user performs a specific operation on the second image, such as moving two touch points together (the zoom-out gesture on a touch screen), the previously hidden first image is displayed in a specific area. That area may be part of the second image's display area, overlaid on it, or the area originally showing the second image may be split into two parts, one showing the first image and the other the second image.
Likewise, the control may be a thumbnail of the first image or the complete first image; alternatively, it may simply be a virtual button which, when triggered, presents the first image.
The above details several presentation effects for the processed target image, which can meet the display needs of different users: a user can choose to view only the overall scene, only the close-up, or both at once, and the process of operating the controls also adds interest, giving the user a good experience.
In other implementations, processing the first image and the at least one second image to obtain a target image that can be displayed in a target display area may include: obtaining configuration information of the target display area, and processing the first image and the at least one second image based on that configuration information to obtain the target image.
The configuration information may be, but is not limited to, the number, size, resolution, and positional relationship of the display screens. For example, with only one display screen, the target image may show only the first image or the second image; with at least two, it may include both, displayed on different screens. Likewise, when the screen is small, the target image may show only one of the images, while on a larger screen the first and second images may be stitched into the target image.
In another implementation, the processing may include: obtaining receiving-object information of the target image, and processing the first image and the at least one second image based on that information to obtain the target image.
The receiving-object information may indicate whether the recipient is a group or an individual. For example, in a chat group the target image needs to include at least the first image: it may be the first image alone or both images. In a one-to-one conversation window, the target image needs to include at least the second image: it may be the second image alone or both images.
In another implementation, the processing may include: obtaining processing capability information of the electronic device carrying the camera module and/or of the receiving device that receives the target image, and processing the first image and the at least one second image based on that capability information to obtain the target image.
For example, if the processing capabilities of both the electronic device carrying the camera module and the receiving device are relatively high, the first and second images may be processed so that the target image includes both. If the capability of the device carrying the camera module is weak, the two images may simply be stitched into the target image; if it is high, richer processing may be applied, such as embedding one image into the other or adding a control corresponding to the second image onto the first image.
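Combining the three factors above, a sketch of how the composition could be selected adaptively; the thresholds and labels are illustrative assumptions rather than values from the application:

    def choose_composition(num_screens, screen_width_px, device_capability):
        """Pick a target-image form from display configuration and
        processing capability."""
        if num_screens >= 2:
            return "separate"        # first and second images on different screens
        if device_capability == "low":
            return "stitch"          # simple left-right stitching only
        if screen_width_px < 1280:
            return "single"          # show only one of the two images
        return "embed_with_control"  # richer processing on capable devices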
The above details several implementations of processing the target image based on the configuration or scenario of the sender and receiver. Adaptively choosing suitable processing according to the actual devices and scenes makes the method more intelligent and serves the user better while preserving the user experience.
For the foregoing method embodiments, for simplicity of description, the methods are presented as a series of actions; however, those skilled in the art will appreciate that the present application is not limited by the order of actions described, since some steps may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
The methods are described in detail in the embodiments disclosed above and can be implemented by various forms of apparatus; therefore the application also discloses an apparatus, of which specific embodiments are given below.
Fig. 8 is a schematic diagram of a control structure of a camera module according to an embodiment of the present application. The camera module comprises a first camera and at least one second camera. Referring to fig. 8, the control device 80 of the camera module may include:
the first control module 801 is configured to control the first camera to acquire a first image in response to acquiring the first image acquisition instruction.
The second control module 802 is configured to control at least one second camera to acquire at least one second image.
The first control module and the second control module may be the same control module.
The image processing module 803 performs corresponding processing on the first image and the at least one second image, so as to obtain a target image capable of being displayed and output to a target display area.
The view finding range of a second camera when acquiring a second image is smaller than the view finding range of the first camera when acquiring the first image, and each second image includes at least one view finding object present in the first image.
When image acquisition is required, the control device of the camera module can use different cameras to obtain, respectively, an image of the whole scene and close-up images of objects of interest, and then process the obtained images of different view finding ranges into a target image that can be presented on a display screen.
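The module structure can be sketched as follows; the stub camera and the class and method names are assumptions introduced for illustration, not the disclosed implementation:

```python
# Structural sketch of control device 80 (Fig. 8); not production code.
from typing import Any, List

class StubCamera:
    """Placeholder for a physical camera and its image signal processor."""
    def capture(self, target: Any = None) -> bytes:
        return b""  # a real camera would return frame data here

class CameraModuleController:
    def __init__(self, first: StubCamera, seconds: List[StubCamera]):
        self.first = first        # wide view finding range
        self.seconds = seconds    # narrower, close-up view finding ranges

    def acquire_first(self) -> bytes:
        # First control module 801: panoramic first image on instruction.
        return self.first.capture()

    def acquire_seconds(self, objects: List[Any]) -> List[bytes]:
        # Second control module 802: one close-up per specified object.
        return [cam.capture(obj) for cam, obj in zip(self.seconds, objects)]

    def process(self, first_img: bytes, second_imgs: List[bytes]) -> bytes:
        # Image processing module 803: trivial concatenation stands in for
        # the stitching/overlay/embedding options described in the text.
        return first_img + b"".join(second_imgs)
```

In this sketch the two control modules are methods of one object, consistent with the note above that the first control module and the second control module may be the same control module.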
Any camera module in the above embodiments includes a processor and a memory. The first control module, the second control module, the image processing module, and so on in the above embodiments are stored in the memory as program modules, and the processor executes the program modules stored in the memory to implement the corresponding functions. In other embodiments, the first control module may correspond to an image signal processor of the first camera, the second control module may correspond to an image signal processor of the second camera, and the image processing module may correspond to an image signal processor of the camera module.
The processor includes a kernel, and the kernel retrieves the corresponding program modules from the memory. One or more kernels may be provided, and the processing described above can be adjusted by tuning kernel parameters.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
In an exemplary embodiment, a computer readable storage medium is provided that stores software code which can be loaded directly into an internal memory of a computer; when loaded and executed by the computer, the program implements the steps shown in any embodiment of the control method of the camera module.
In an exemplary embodiment, a computer program product is also provided that can be loaded directly into an internal memory of a computer and contains software code; when loaded and executed by the computer, the program implements the steps shown in any embodiment of the control method of the camera module.
Further, an embodiment of the present application provides an electronic device. Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to Fig. 9, the electronic device includes a camera module, where the camera module includes a first camera 901 and at least one second camera 902. The electronic device also includes at least one processor 903, at least one memory 904 connected to the processor, and a bus 905. The processor and the memory communicate with each other through the bus, and the processor is configured to call the program instructions in the memory to execute the control method of the camera module.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
It is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A control method of a camera module, the camera module comprising a first camera and at least one second camera, the method comprising:
in response to obtaining a first image acquisition instruction, controlling the first camera to acquire a first image;
controlling the at least one second camera to acquire at least one second image;
performing corresponding processing on the first image and the at least one second image to obtain a target image that can be displayed and output to a target display area;
wherein a view finding range of the second camera when acquiring a second image is smaller than a view finding range of the first camera when acquiring the first image, and each second image comprises at least one view finding object present in the first image.
2. The method of claim 1, wherein controlling at least one second camera to acquire at least one second image comprises:
acquiring position information and/or characteristic information of at least one specified view finding object based on the first image;
providing the position information and/or the characteristic information to at least one corresponding second camera, so as to control the at least one second camera to acquire, based on the position information and/or the characteristic information, at least one second image for the specified view finding object.
3. The method of claim 1, wherein controlling at least one second camera to acquire at least one second image comprises:
in the process of acquiring the first image, in response to obtaining a second image acquisition instruction for a specified view finding object, controlling the at least one second camera to acquire at least one second image for the specified view finding object;
wherein obtaining the second image acquisition instruction comprises at least one of:
determining to obtain the second image acquisition instruction in response to obtaining a selection operation acting on a preview interface of the first image;
determining to obtain the second image acquisition instruction in response to obtaining a voice input for at least one view finding object in the first image;
determining to obtain the second image acquisition instruction in response to obtaining a sight line control operation of a target user within the view finding range of the first camera;
determining to obtain the second image acquisition instruction in response to obtaining a gesture control operation of a target user within the view finding range of the first camera;
performing sound source localization on sound data within the view finding range of the first camera, and obtaining the second image acquisition instruction based on the sound source localization result.
4. The method of claim 1, wherein controlling at least one second camera to acquire at least one second image comprises:
if the first image acquisition instruction comprises description information for at least one specified view finding object, synchronously controlling, based on the description information in the first image acquisition instruction, the at least one second camera to acquire at least one second image for the specified view finding object.
5. The method of claim 1, wherein obtaining a first image acquisition instruction comprises at least one of:
determining to obtain the first image acquisition instruction in response to a triggering operation on a target image acquisition application;
generating the first image acquisition instruction in response to determining that target interaction data occurs between an electronic device and a target device, wherein the electronic device is a device equipped with the camera module;
determining to obtain the first image acquisition instruction in response to the camera module entering a target environment area;
determining to obtain the first image acquisition instruction in response to the camera module switching from a first motion state to a second motion state, wherein a motion variation of the camera module in the first motion state is larger than that in the second motion state;
determining to obtain the first image acquisition instruction in response to the camera module switching from a first pose state to a second pose state.
6. The method of any one of claims 1 to 5, wherein controlling the at least one second camera to acquire the at least one second image comprises at least one of:
if at least two specified view finding objects exist in the first image, controlling at least two second cameras to acquire at least two second images for the at least two specified view finding objects;
if one specified view finding object exists in the first image, controlling one or at least two second cameras to acquire at least one second image for the specified view finding object;
if an image acquisition instruction for at least two specified view finding objects is obtained, controlling at least two second cameras to acquire at least two second images for the at least two specified view finding objects;
if an image acquisition instruction for a single specified view finding object is obtained, controlling one or at least two second cameras to acquire at least one second image for the specified view finding object;
obtaining configuration information of the second cameras, and controlling at least one second camera to acquire at least one second image for the specified view finding object based on the configuration information and the number of specified view finding objects.
7. The method of claim 1, wherein performing corresponding processing on the first image and the at least one second image to obtain a target image that can be displayed and output to a target display area comprises at least one of the following:
performing stitching processing on the first image and the at least one second image to obtain a third image that can be displayed and output to a target display area;
superimposing the first image and the at least one second image to obtain a fourth image that can be displayed and output to a target display area;
performing embedding processing on the at least one second image and the first image to obtain a fifth image that can be displayed and output to a target display area;
processing the at least one second image into a control that can be triggered for display by a target operation acting on the first image, and outputting the first image comprising the control as the target image displayed in the target display area;
processing the first image into a control that can be triggered for display by a target operation acting on any second image, and outputting the second image comprising the control as the target image displayed in the target display area.
8. The method of claim 1 or 7, wherein performing corresponding processing on the first image and the at least one second image to obtain a target image that can be displayed and output to a target display area comprises at least one of:
obtaining configuration information of the target display area, and performing corresponding processing on the first image and the at least one second image based on the configuration information to obtain the target image;
obtaining receiving object information of the target image, and performing corresponding processing on the first image and the at least one second image based on the receiving object information to obtain the target image;
obtaining processing capability information of the electronic device equipped with the camera module and/or of a receiving device for receiving the target image, and performing corresponding processing on the first image and the at least one second image based on the processing capability information to obtain the target image.
9. A camera module, comprising a first camera and at least one second camera, and further comprising:
a first control module, configured to control the first camera to acquire a first image in response to a first image acquisition instruction;
a second control module, configured to control the at least one second camera to acquire at least one second image;
an image processing module, configured to perform corresponding processing on the first image and the at least one second image to obtain a target image that can be displayed and output to a target display area;
wherein a view finding range of the second camera when acquiring a second image is smaller than a view finding range of the first camera when acquiring the first image, and each second image comprises at least one view finding object present in the first image.
10. An electronic device, comprising a camera module, the camera module comprising:
a first camera and at least one second camera;
a processor;
a memory for storing executable program instructions of the processor;
wherein the executable program instructions comprise: in response to obtaining a first image acquisition instruction, controlling the first camera to acquire a first image; controlling the at least one second camera to acquire at least one second image; performing corresponding processing on the first image and the at least one second image to obtain a target image that can be displayed and output to a target display area; wherein a view finding range of the second camera when acquiring a second image is smaller than a view finding range of the first camera when acquiring the first image, and each second image comprises at least one view finding object present in the first image.
CN202310484162.6A 2023-04-28 2023-04-28 Control method of camera module, camera module and electronic equipment Pending CN116489504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310484162.6A CN116489504A (en) 2023-04-28 2023-04-28 Control method of camera module, camera module and electronic equipment

Publications (1)

Publication Number Publication Date
CN116489504A true CN116489504A (en) 2023-07-25

Family

ID=87224915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310484162.6A Pending CN116489504A (en) 2023-04-28 2023-04-28 Control method of camera module, camera module and electronic equipment

Country Status (1)

Country Link
CN (1) CN116489504A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination