CN113691731A - Processing method and device and electronic equipment

Processing method and device and electronic equipment

Info

Publication number
CN113691731A
Authority
CN
China
Prior art keywords
target
region
area
drawing area
determining
Prior art date
Legal status
Granted
Application number
CN202111043677.XA
Other languages
Chinese (zh)
Other versions
CN113691731B (en)
Inventor
陈文辉 (Chen Wenhui)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202111043677.XA
Publication of CN113691731A
Application granted
Publication of CN113691731B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/815: Camera processing pipelines for controlling the resolution by using a single image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

According to the processing method, processing device, and electronic equipment of the application, a first drawing area used for corresponding in real time to the acquisition information sensed by a first camera is created based on the started first camera; a second drawing area used for corresponding in real time to the output information is created based on the output requirement; and, according to the target position of a target object (included in the acquisition information) within the first drawing area and according to the second drawing area, the target acquisition information of a target area in the first drawing area that includes the target position is determined as the acquisition information of the second drawing area. By taking the target position of the target object and the second drawing area as the basis for selecting the corresponding part of the first drawing area's acquisition information as the acquisition information of the second drawing area, the target object is automatically fitted to a suitable position of the second drawing area, which corresponds in real time to the output information. The user therefore does not need to move the device manually, and the camera shake caused by manually moving the device is avoided.

Description

Processing method and device and electronic equipment
Technical Field
The present application relates to the field of information acquisition and processing, and in particular, to a processing method and apparatus, and an electronic device.
Background
Electronic devices such as smartphones are generally provided with image/video capture functionality. The current related art, however, has the following defect when acquiring image/video information: when shooting images or recording video, the framing range is limited, so keeping the shooting/recording subject (e.g., a target person) at the center of the picture as far as possible requires moving the device continuously. This is inconvenient for the user, inevitably introduces camera shake, and degrades the look of the resulting image/video.
Disclosure of Invention
Therefore, the application discloses the following technical scheme:
a method of processing, comprising:
creating a first drawing area based on the started first camera, wherein the first drawing area is used for corresponding in real time to the acquisition information sensed by the first camera;
creating a second drawing area based on the output requirement, wherein the second drawing area is used for corresponding to the output information in real time;
analyzing the acquisition information to obtain a target object included in the acquisition information;
determining a corresponding target position of the target object in the first drawing area;
and according to the target position and the second drawing area, determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area, wherein the target area and the second drawing area have the same size.
Optionally, wherein:
the creating of the first drawing area based on the turned-on first camera comprises:
creating a first drawing area based on the resolution of the acquired information of the first camera;
the creating a second drawing region based on the output requirement includes:
creating a second drawing area based on the target resolution defined by the output requirement;
and the target resolution is smaller than the resolution of the acquisition information of the first camera.
Optionally, the target resolution is the highest resolution of the display screen, the resolution of a video file the user chooses to create, or the resolution imposed by real-time video transmission.
Optionally, the determining a corresponding target position of the target object in the first drawing area includes:
identifying a target object with corresponding object characteristics in the first drawing area by using an artificial intelligence algorithm;
and extracting the corresponding position information of the identified target object in the first drawing area.
Optionally, the first drawing region includes a central region and an edge region;
the determination process of the target area comprises the following steps:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
Optionally, the central area is the set of points in the first drawing area for which a region centered on that point and having the size of the second drawing area lies entirely within the first drawing area; the edge area is the part of the first drawing area outside the central area;
the determining the target area according to the center principle includes:
determining, as the target area, the region in the first drawing area that is centered on the target position and has the size of the second drawing area;
the determining the target area according to the edge principle includes:
and determining, as the target area, a region in the first drawing area that has the size of the second drawing area, contains the target position, and whose center point satisfies the proximity condition with respect to the target position.
Optionally, the method further includes:
determining a target distance between the target object and a device plane of a device where the first camera is located;
and if the target distance meets a distance condition, zooming the first camera according to the target distance.
A processing apparatus, comprising:
the first creating module is used for creating a first drawing area based on the started first camera, and the first drawing area is used for corresponding in real time to the acquisition information sensed by the first camera;
the second creating module is used for creating a second drawing area based on the output requirement, and the second drawing area is used for corresponding to the output information in real time;
the analysis module is used for analyzing the acquisition information to obtain a target object included in the acquisition information;
a first determining module, configured to determine a corresponding target position of the target object in the first drawing area;
a second determining module, configured to determine, according to the target location and the second drawing region, target acquisition information of a target region including the target location in the first drawing region as acquisition information of the second drawing region, where the target region and the second drawing region have a same size.
Optionally, the first drawing region includes a central region and an edge region;
the second determining module, when determining the target area, is specifically configured to:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
An electronic device comprising at least:
a first camera;
a memory for storing a set of computer instructions;
a processor for implementing the processing method disclosed above by executing the instruction set stored in the memory.
In summary, with the processing method, processing device, and electronic equipment of the application, a first drawing area used for corresponding in real time to the acquisition information sensed by the first camera is created based on the started first camera; a second drawing area used for corresponding in real time to the output information is created based on the output requirement; and, according to the target position of the target object (included in the acquisition information) within the first drawing area and according to the second drawing area, the target acquisition information of a target area in the first drawing area that includes the target position is determined as the acquisition information of the second drawing area. By taking the target position of the target object and the second drawing area as the basis for selecting the corresponding part of the first drawing area's acquisition information as the acquisition information of the second drawing area, the target object is automatically fitted to a suitable position of the second drawing area (for example, as close to its center as possible), which corresponds in real time to the output information. The user does not need to move the device manually, and the camera shake caused by manually moving the device is correspondingly avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow diagram of the processing method provided herein;
FIG. 2 is an example of a created first rendered area provided herein;
FIG. 3 is an example of a created second drawing area provided herein;
FIGS. 4(a)-4(e) are exemplary diagrams provided herein of determining different target areas according to the center principle for different target positions;
FIGS. 5(a)-5(b) are exemplary diagrams of different candidate regions corresponding to position P provided herein;
FIG. 6 is a schematic diagram of the width and height resolution ratios of the first and second drawing areas provided herein;
FIG. 7 is another schematic flow diagram of the processing method provided herein;
FIG. 8 is a schematic diagram of the structure of a processing apparatus provided herein;
fig. 9 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to achieve a good image-shooting/video-recording effect in which the target object stays as close to the picture center as possible without the user manually moving the device, the application discloses a processing method, a processing device, and electronic equipment.
The method and apparatus may be applied to an electronic device with a camera (i.e., the electronic device disclosed herein), which may be, but is not limited to, any of numerous general purpose or special purpose computing device environments or configurations, such as: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor appliances, and the like.
The processing procedure of the processing method disclosed by the application is shown in fig. 1, and specifically includes:
Step 101, creating a first drawing area based on the started first camera, wherein the first drawing area is used for corresponding in real time to the acquisition information sensed by the first camera.
The acquisition information sensed by the first camera is an image shot by the first camera or a recorded video image frame.
The electronic device starts the first camera in response to an image-shooting instruction, indicating that an image is to be shot, or a video-recording instruction, indicating that a video is to be recorded; either instruction may be triggered manually by the user or automatically by the device.
In the embodiment of the application, the field of view of the first camera when collecting information (images/video frames) is larger than the field of view matching the output information (images/video frames), and the resolution of the acquisition information sensed by the first camera is correspondingly larger than the target resolution of the output information defined by the output requirement.
Preferably, the first camera is a wide-angle camera.
The output requirement defines the target resolution of the output information, i.e., the resolution required when outputting an image or video frame. Depending on the usage scenario, the target resolution may be, but is not limited to, the highest resolution of the display screen, the resolution of the video file the user chooses to create, or the resolution imposed by real-time video transmission (e.g., in a video call/video conference).
In response to the image-shooting or video-recording instruction, the electronic device starts the first camera and, based on it, creates a first drawing area used for corresponding in real time to the acquisition information the camera senses. The size of the first drawing area matches the resolution of the acquisition information (image or video frame) sensed by the first camera.
Referring to the example of fig. 2, the first drawing area is represented as a region Preview1 divided into five equal parts in each of the height and width directions.
Step 102, creating a second drawing area based on the output requirement, wherein the second drawing area is used for corresponding in real time to the output information.
The output information is the image that is imaged and output in an image-shooting scene; or, in a video-recording scene, the video frame that is output to an encoder to be encoded and stored into a video file, or the video frame to be transmitted in real time (e.g., the video picture transmitted to the peer device in a video call/video conference).
In response to the image-shooting or video-recording instruction, this embodiment creates, in addition to the first drawing area based on the first camera, a second drawing area used for corresponding in real time to the output information, based on the output requirement.
Specifically, the present embodiment creates the second drawing region based on the target resolution limited by the output requirement.
The size of the second drawing area matches the target resolution of the output information (images/video frames) defined by the output requirement; since that target resolution is smaller than the resolution of the first camera's acquisition information, the second drawing area is correspondingly smaller than the first drawing area.
Referring to the example of fig. 3, the second drawing area is represented as a region Preview2 divided into three equal parts in each of the height and width directions; an equal-division unit of Preview1 has the same size as one of Preview2 in the same direction.
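As a minimal illustrative sketch (not part of the patent text; the buffer sizes and names below are assumptions), the two drawing areas can be modeled as pixel buffers whose sizes follow the camera resolution and the target resolution:

```python
import numpy as np

# Assumed resolutions: a wide-angle sensor for Preview1 and an FHD output
# requirement for Preview2 (values chosen only for illustration).
CAM_W, CAM_H = 3200, 1800   # resolution of the first camera's acquisition information
OUT_W, OUT_H = 1920, 1080   # target resolution defined by the output requirement

# First drawing area: corresponds in real time to the sensed frames.
preview1 = np.zeros((CAM_H, CAM_W, 3), dtype=np.uint8)

# Second drawing area: corresponds in real time to the output information;
# it is smaller because the target resolution is below the camera resolution.
preview2 = np.zeros((OUT_H, OUT_W, 3), dtype=np.uint8)
```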
Step 103, analyzing the acquisition information of the first camera to obtain a target object included in the acquisition information.
The target object is the subject to be shot with emphasis, such as a person, an animal, or a scene, in a photographing or video-recording scene. In implementation, the target object in the first drawing area may be identified according to the object features of an object designated by the user (e.g., a person the user specifies in the photo or video preview), or the electronic device may identify the key shooting object in the image or video frame as the target object based on an algorithm.
In a video-recording scene, preferably, a corresponding Artificial Intelligence (AI) algorithm may be used to identify in real time, in each video frame captured by the first camera, a target object whose object features have been learned in advance. The object features may include, but are not limited to, facial features and/or skeletal features of a human or animal subject.
Preferably, the facial features include facial features corresponding to different facial expressions of a subject such as a person or an animal, and the skeletal features include skeletal features corresponding to different poses of the subject such as a person or an animal.
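The patent does not prescribe a concrete recognition algorithm. As one hedged illustration, a stock face detector such as OpenCV's Haar cascade could stand in for the AI identification of a person object by facial features; the detector choice and the function name are assumptions:

```python
import cv2

# A stock frontal-face detector shipped with OpenCV, used here as an assumed
# stand-in for an AI algorithm with pre-learned object features.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_target_position(frame_bgr):
    """Return the (x, y) pixel coordinates of the detected subject's center
    in the first drawing area, or None if no subject is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x + w // 2, y + h // 2)
```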
Step 104, determining a corresponding target position of the target object in the first drawing area.
Then, the position information of the identified target object in the first drawing area is further determined to obtain the target position, which may be, but is not limited to, a position expressed in pixel coordinates.
Step 105, determining, according to the target position and the second drawing area, target acquisition information of a target area including the target position in the first drawing area as the acquisition information of the second drawing area, wherein the target area and the second drawing area have the same size.
After the target position of the target object corresponding to the first drawing area is determined, a target area which comprises the target position and has the same size as the second drawing area is determined from the first drawing area according to the target position and the second drawing area, and target acquisition information corresponding to the determined target area is used as acquisition information of the second drawing area.
Wherein the first drawing region includes a center region and an edge region.
In this embodiment, the central area of the first drawing area is set to the set of points in the first drawing area for which a region centered on that point and having the size of the second drawing area lies entirely within the first drawing area.
Accordingly, the edge area of the first drawing area is set to the part of the first drawing area outside the central area.
When the target area is determined, if the target position of the target object belongs to the central area of the first drawing area, determining the target area according to a central principle; otherwise, if the target position belongs to the edge area of the first rendering area, the target area is determined according to the edge principle.
Determining the target area according to the center principle can be further implemented as: determining, as the target area, the region centered on the target position of the target object and having the size of the second drawing area.
For example, if the target position of the target object in the first drawing area is the center position O of the first drawing area, then, as shown in fig. 4(a), the target area determined on the center principle is area_1, the region of the first drawing area centered on O and having the same size as the second drawing area. Referring to fig. 4(b), if the target position is position A, the region area_2, centered on A and having the same size as the second drawing area, is determined as the target area. Similarly, figs. 4(c)-4(e) show the target areas area_3, area_4, and area_5 determined when the target position is position B, C, or D in the first drawing area, respectively.
Determining the target area according to the edge principle can be further implemented as: determining, as the target area, a region in the first drawing area that has the size of the second drawing area, contains the target position, and whose center point satisfies the proximity condition with respect to the target position.
Preferably, the proximity condition characterizes that, among all candidate regions, the selected region's center point has the shortest distance to the target position; a candidate region is a region in the first drawing area that has the size of the second drawing area and contains the target position.
Taking the target position of the target object in the first drawing area to be position P, the candidate regions having the size of the second drawing area and containing P are area_x and area_y, shown in figs. 5(a) and 5(b) respectively. Since the distance between the center of area_x and position P is smaller than the distance between the center of area_y and position P, area_x is determined as the target area.
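Both principles can be realized with a single clamping step, sketched below under the assumption that positions are pixel coordinates and dimensions are even, as with standard resolutions (names are illustrative). Inside the central area the clamp changes nothing, which is exactly the center principle; in the edge area it yields the Preview2-sized rectangle that contains the target position and whose center is nearest to it, which is exactly the proximity condition of the edge principle:

```python
def determine_target_area(target_pos, cam_size, out_size):
    """Return the top-left corner (x0, y0) of the target area: a region with
    the size of the second drawing area that contains the target position
    and lies fully inside the first drawing area."""
    tx, ty = target_pos
    cam_w, cam_h = cam_size
    out_w, out_h = out_size
    # Clamp the desired rectangle center into the central area of Preview1.
    cx = min(max(tx, out_w // 2), cam_w - out_w // 2)
    cy = min(max(ty, out_h // 2), cam_h - out_h // 2)
    return cx - out_w // 2, cy - out_h // 2
```

Under this formulation the central-area membership test never needs to be evaluated separately: the clamp leaves the center unchanged exactly when the target position belongs to the central area.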
On the basis of the determined target area, the acquisition information corresponding to it (i.e., the target acquisition information) is further used as the acquisition information of the second drawing area, and the acquisition information of the second drawing area serves as the output information when information is output.
In an embodiment, the acquisition information sensed by the first camera is image information in a photographing scene. The image corresponding to the target area (essentially a part of the image acquired by the first camera) is used as the image of the second drawing area, so it can be output in real time on the device's display (for the user to view the shot) and saved on the device.
In another embodiment, the acquisition information sensed by the first camera consists of video frames in a video-recording scene (e.g., video capture, or a video call/video conference). In this embodiment, the part of each video frame captured by the first camera that lies in the target area is used as the video frame of the second drawing area. Accordingly, the second drawing area's frames obtained during recording can be output in real time on the device's display (for the user to view while recording), and the subsequent processing matching the current scene is performed: in a video call/video conference, the frames of the second drawing area are transmitted in real time to the peer device participating in the call/conference; when recording a video file, they are passed to a video encoder, which encodes them to form the recorded file.
In implementation, the part located in the target area can be cropped, by an image-cropping means, out of the image or video frame corresponding to the first drawing area to serve as the image or video frame of the second drawing area; alternatively, that part can be extracted directly, based on an information-extraction technique, without performing any cropping operation.
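Continuing the sketch above (again illustrative, with `determine_target_area` being the hypothetical helper from the previous block), the extraction variant can be a plain array slice over the frame of the first drawing area, i.e., a view rather than a cropped copy:

```python
def acquisition_for_preview2(frame, target_pos, out_size):
    """Take the part of the first drawing area's frame that lies in the
    target area as the acquisition information of the second drawing area."""
    h, w = frame.shape[:2]
    x0, y0 = determine_target_area(target_pos, (w, h), out_size)
    out_w, out_h = out_size
    return frame[y0:y0 + out_h, x0:x0 + out_w]  # a NumPy view, not a copy
```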
In the method, a wide-angle camera is used as the first camera, making full use of its wide field of view to acquire images or video frames at a resolution larger than the target resolution defined by the output requirement. The acquisition information of the target area containing the target position is then selected from the first drawing area according to the center principle or the edge principle and used as the acquisition information of the second drawing area. The target object is thus automatically fitted to the center of the second drawing area, or as close to the center as possible, without the user manually moving the device, and the camera shake caused by manual movement is correspondingly avoided.
Based on this embodiment, in a video-recording scene the electronic device automatically crops or extracts an image region centered (as far as possible) on the target object (such as a person) in every video frame. As the first camera keeps acquiring frames, this produces a follow-shot effect in which the target object stays at, or as near as possible to, the center of the picture in the recorded video file or in the series of frames transmitted in real time. Moreover, because the wide-angle camera covers a wide field of view, the photographer does not need to move the device to keep up with the motion of the photographed subject (such as a person), and the video-picture shake caused by moving the device is correspondingly avoided.
In an embodiment, referring to fig. 6, let the width and height resolutions of the second drawing area be given as W2 and H2, and let Kw and Kh be the given scaling factors between the width and height resolutions of the first drawing area and those of the second. The required native output resolution of the first camera, such as a wide-angle camera, is then:
W1=W2*Kw;
H1=H2*Kh。
W1 and H1 denote the width and height resolutions of the first drawing area, respectively.
Taking as an example a recorded video whose frames are FHD (Full High Definition), i.e., 1920 (W2) × 1080 (H2), with scaling factors Kw = Kh = 5/3:
W1=1920*5/3=3200;
H1=1080*5/3=1800。
that is, the original output resolution of the first camera, such as a wide-angle camera, is required to be 3200,1800 wide and 1800 high, so as to ensure that the resolution of the video image frame of the recorded video is correct, and the user does not feel blurred due to the processes of cutting and zooming the picture based on the first drawing area and the second drawing area.
In an embodiment, referring to the flowchart of the processing method provided in fig. 7, the processing method disclosed in the present application may further include:
Step 701, determining a target distance between the target object and a device plane of the device where the first camera is located.
Specifically, the distance between the target object and the device plane of the device where the first camera is located can be measured with a ranging technique, yielding the target distance. The ranging technique may be, but is not limited to, time-of-flight (TOF) ranging.
Step 702, if the target distance meets the distance condition, zooming the first camera according to the target distance.
The distance condition characterizes that the target object is too close to, or too far from, the device plane of the device where the first camera is located for the first camera to image it clearly. It may be set, but is not limited, to: the distance between the target object and the device plane is greater than a preset first threshold, or smaller than a preset second threshold, where the second threshold is smaller than the first threshold.
When the target distance meets the distance condition, the first camera is zoomed. Based on the changing distance between the target object and the electronic device, the device automatically judges whether the distance condition is met and, if so, adjusts the focal length of the first camera to a value appropriate to the distance between the target object and the device plane. The user thus never performs a manual zoom operation, yet the target object is imaged clearly; at the same time, the hand trembling and picture jumping of manual zooming are avoided, giving a better shooting effect.
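A hedged sketch of the distance-conditioned zoom: the thresholds, the distance-to-zoom mapping, and the `camera` object with its `set_zoom`/`min_zoom`/`max_zoom` interface are all assumptions, since the patent only requires that zooming follow the target distance when the distance condition is met:

```python
# Assumed thresholds: beyond FAR_M the subject is too far for clear imaging,
# below NEAR_M it is too close (the second threshold is below the first).
FAR_M, NEAR_M = 3.0, 0.3

def maybe_zoom(camera, target_distance_m):
    """Zoom the first camera when the TOF-measured target distance meets the
    distance condition; otherwise leave the focal length unchanged."""
    if NEAR_M <= target_distance_m <= FAR_M:
        return  # distance condition not met: no zooming needed
    if target_distance_m > FAR_M:
        factor = target_distance_m / FAR_M    # zoom in on a distant subject
    else:
        factor = target_distance_m / NEAR_M   # zoom out on a too-close subject
    camera.set_zoom(max(camera.min_zoom, min(camera.max_zoom, factor)))
```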
Corresponding to the processing method, an embodiment of the present application further discloses a processing apparatus whose composition structure is shown in fig. 8 and which specifically includes:
a first creating module 801, configured to create a first drawing area based on a turned-on first camera, where the first drawing area is used to correspond to acquired information sensed by the first camera in real time;
a second creating module 802, configured to create a second drawing area based on the output requirement, where the second drawing area is used for corresponding to the output information in real time;
an analyzing module 803, configured to analyze the collected information to obtain a target object included in the collected information;
a first determining module 804, configured to determine a corresponding target position of the target object in the first drawing area;
a second determining module 805, configured to determine, according to the target location and the second drawing area, target acquisition information of a target area including the target location in the first drawing area as acquisition information of the second drawing area, where the target area and the second drawing area have the same size.
In an embodiment, the first creating module 801 is specifically configured to: creating a first drawing area based on the resolution of the acquired information of the first camera;
the second creating module 802 is specifically configured to: creating a second rendering region based on the target resolution limited by the output requirement;
and the target resolution is smaller than the resolution of the acquired information of the first camera.
In one embodiment, the target resolution is the highest resolution of the display screen, or the resolution of the video file selected by the user to be created, or the resolution limited by the real-time video transmission.
In an embodiment, the first determining module 804 is specifically configured to: identifying a target object with corresponding object characteristics in the first drawing area by using an artificial intelligence algorithm; and extracting the corresponding position information of the identified target object in the first drawing area.
In an embodiment, the first drawing region includes a center region and an edge region;
the second determining module 805, when determining the target area, is specifically configured to:
if the target position belongs to the central area of the first drawing area, determining the target area according to a central principle; and if the target position belongs to the edge area of the first drawing area, determining the target area according to an edge principle.
In one embodiment, the central area of the first drawing area is the set of points in the first drawing area for which a region centered on that point and having the size of the second drawing area lies entirely within the first drawing area; the edge area of the first drawing area is the part of the first drawing area outside the central area;
the second determining module 805, when determining the target area according to the center principle, is specifically configured to:
determining, as the target area, the region in the first drawing area that is centered on the target position and has the size of the second drawing area;
when determining the target area according to the edge principle, the second determining module 805 is specifically configured to:
and determining, as the target area, a region in the first drawing area that has the size of the second drawing area, contains the target position, and whose center point satisfies the proximity condition with respect to the target position.
In an embodiment, the apparatus further includes:
a zoom processing module to: determining a target distance between a target object and an equipment plane of equipment where a first camera is located; and if the determined target distance meets the distance condition, zooming the first camera according to the target distance.
Since the processing apparatus disclosed in the embodiments of the present application corresponds to the processing method disclosed in the method embodiments above, its description is relatively brief; for the relevant similarities, refer to the description of the corresponding method embodiments, which is not repeated here.
The embodiment of the present application further discloses an electronic device, a composition structure of which is shown in fig. 9, including:
a first camera 901;
a memory 902 for storing at least one set of instructions.
The set of instructions may be embodied in the form of a computer program.
a processor 903 for implementing the processing method disclosed in any of the above method embodiments by executing the instruction set in the memory.
The processor 903 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
Besides, the electronic device may further include a communication interface, a communication bus, and the like. The memory, the processor and the communication interface communicate with each other via a communication bus.
The communication interface is used for communication between the electronic device and other devices. The communication bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and the like.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
For convenience of description, the above system or apparatus is described as being divided into various modules or units by function, respectively. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method of processing, comprising:
creating a first drawing area based on the started first camera, wherein the first drawing area is used for corresponding in real time to the acquisition information sensed by the first camera;
creating a second drawing area based on the output requirement, wherein the second drawing area is used for corresponding to the output information in real time;
analyzing the acquisition information to obtain a target object included in the acquisition information;
determining a corresponding target position of the target object in the first drawing area;
and according to the target position and the second drawing area, determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area, wherein the target area and the second drawing area have the same size.
2. The method of claim 1, wherein:
the creating of the first drawing area based on the turned-on first camera comprises:
creating a first drawing area based on the resolution of the acquired information of the first camera;
the creating a second drawing region based on the output requirement includes:
creating a second drawing area based on the target resolution defined by the output requirement;
and the target resolution is smaller than the resolution of the acquisition information of the first camera.
3. The method of claim 2, wherein the target resolution is the highest resolution of a display screen, the resolution of a video file the user chooses to create, or the resolution imposed by real-time video transmission.
4. The method of claim 1, the determining a corresponding target position of the target object in the first rendering region, comprising:
identifying a target object with corresponding object characteristics in the first drawing area by using an artificial intelligence algorithm;
and extracting the corresponding position information of the identified target object in the first drawing area.
5. The method of claim 1, the first drawing region comprising a center region and an edge region;
the determination process of the target area comprises the following steps:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
6. The method of claim 5, wherein the central area is the set of points in the first drawing area for which a region centered on that point and having the size of the second drawing area lies entirely within the first drawing area; the edge area is the part of the first drawing area outside the central area;
the determining the target area according to the center principle includes:
determining, as the target area, the region in the first drawing area that is centered on the target position and has the size of the second drawing area;
the determining the target area according to the edge principle includes:
and determining, as the target area, a region in the first drawing area that has the size of the second drawing area, contains the target position, and whose center point satisfies the proximity condition with respect to the target position.
7. The method of claim 1, further comprising:
determining a target distance between the target object and a device plane of a device where the first camera is located;
and if the target distance meets a distance condition, zooming the first camera according to the target distance.
8. A processing apparatus, comprising:
a first creating module, used for creating a first drawing area based on the started first camera, the first drawing area being used for corresponding in real time to the acquisition information sensed by the first camera;
the second creating module is used for creating a second drawing area based on the output requirement, and the second drawing area is used for corresponding to the output information in real time;
the analysis module is used for analyzing the acquisition information to obtain a target object included in the acquisition information;
a first determining module, configured to determine a corresponding target position of the target object in the first drawing area;
a second determining module, configured to determine, according to the target location and the second drawing region, target acquisition information of a target region including the target location in the first drawing region as acquisition information of the second drawing region, where the target region and the second drawing region have a same size.
9. The apparatus of claim 8, the first drawing region comprising a center region and an edge region;
the second determining module, when determining the target area, is specifically configured to:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
10. An electronic device comprising at least:
a first camera;
a memory for storing a set of computer instructions;
a processor for implementing the processing method of any one of claims 1 to 7 by executing a set of instructions stored on a memory.
CN202111043677.XA; priority date 2021-09-07; filing date 2021-09-07; Processing method and device and electronic equipment; Active; granted as CN113691731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111043677.XA CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111043677.XA CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113691731A (en) 2021-11-23
CN113691731B CN113691731B (en) 2023-06-23

Family

ID=78585469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111043677.XA Active CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113691731B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050058369A1 (en) * 2003-09-09 2005-03-17 Fuji Photo Film Co., Ltd. Apparatus, method and program for generating photo card data
CN104952027A (en) * 2014-10-11 2015-09-30 腾讯科技(北京)有限公司 Face-information-contained picture cutting method and apparatus
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN110191369A (en) * 2019-06-06 2019-08-30 广州酷狗计算机科技有限公司 Image interception method, apparatus, equipment and storage medium
CN111524145A (en) * 2020-04-13 2020-08-11 北京智慧章鱼科技有限公司 Intelligent picture clipping method and system, computer equipment and storage medium
CN112700454A (en) * 2020-12-28 2021-04-23 北京达佳互联信息技术有限公司 Image cropping method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113691731B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US8810673B2 (en) Composition determination device, composition determination method, and program
JP4639869B2 (en) Imaging apparatus and timer photographing method
KR20180084085A (en) METHOD, APPARATUS AND ELECTRONIC DEVICE
CN110650291B (en) Target focus tracking method and device, electronic equipment and computer readable storage medium
CN113973190A (en) Video virtual background image processing method and device and computer equipment
CN103685940A (en) Method for recognizing shot photos by facial expressions
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN109002796B (en) Image acquisition method, device and system and electronic equipment
US10698297B2 (en) Method for automatically focusing on specific target object, photographic apparatus including automatic focus function, and computer readable storage medium for storing automatic focus function program
JP2007074143A (en) Imaging device and imaging system
CN110493512B (en) Photographic composition method, photographic composition device, photographic equipment, electronic device and storage medium
CN110213492B (en) Device imaging method and device, storage medium and electronic device
CN113630549B (en) Zoom control method, apparatus, electronic device, and computer-readable storage medium
CN114390201A (en) Focusing method and device thereof
CN112036311A (en) Image processing method and device based on eye state detection and storage medium
CN112887610A (en) Shooting method, shooting device, electronic equipment and storage medium
JP2018081402A (en) Image processing system, image processing method, and program
CN109981967B (en) Shooting method and device for intelligent robot, terminal equipment and medium
CN113283319A (en) Method and device for evaluating face ambiguity, medium and electronic equipment
CN107578006B (en) Photo processing method and mobile terminal
CN112219218A (en) Method and electronic device for recommending image capture mode
CN113691731B (en) Processing method and device and electronic equipment
US10282633B2 (en) Cross-asset media analysis and processing
CN112995503B (en) Gesture control panoramic image acquisition method and device, electronic equipment and storage medium
CN114390206A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant