CN113691731B - Processing method and device and electronic equipment

Info

Publication number
CN113691731B
CN113691731B
Authority
CN
China
Prior art keywords
target
area
drawing area
region
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111043677.XA
Other languages
Chinese (zh)
Other versions
CN113691731A (en)
Inventor
陈文辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202111043677.XA priority Critical patent/CN113691731B/en
Publication of CN113691731A publication Critical patent/CN113691731A/en
Application granted granted Critical
Publication of CN113691731B publication Critical patent/CN113691731B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/815 Camera processing pipelines; Components thereof for controlling the resolution by using a single image

Abstract

According to the processing method, processing device, and electronic device of the present application, a first drawing area that corresponds in real time to the acquisition information sensed by a first camera is created based on the turned-on first camera, and a second drawing area that corresponds in real time to the output information is created based on the output requirement. According to the target position of a target object, included in the acquisition information, within the first drawing area, and according to the second drawing area, target acquisition information of a target area that includes the target position in the first drawing area is determined as the acquisition information of the second drawing area. By determining, based on the target position of the target object and the second drawing area, the corresponding part of the acquisition information of the first drawing area as the acquisition information of the second drawing area, the target object is automatically adapted to a suitable position in the second drawing area that corresponds in real time to the output information. The user therefore does not need to move the device manually, and the camera shake caused by manually moving the device is correspondingly avoided.

Description

Processing method and device and electronic equipment
Technical Field
The application belongs to the field of information acquisition and processing, and particularly relates to a processing method, a processing device and electronic equipment.
Background
Electronic devices such as smartphones are typically provided with an image/video capture function. However, the current related art has the following drawback when acquiring image/video information: when shooting an image or recording a video, the camera is limited by its viewfinding range, so to keep the shooting/recording subject (such as a target person) positioned at the center of the picture as far as possible, the user must continuously move the device. This is inconvenient for the user, and it inevitably introduces camera shake that degrades the look and feel of the image/video.
Disclosure of Invention
Therefore, the application discloses the following technical solutions:
a method of processing, comprising:
creating a first drawing area based on a turned-on first camera, wherein the first drawing area is used for corresponding to acquired information sensed by the first camera in real time;
creating a second drawing area based on the output requirement, wherein the second drawing area is used for outputting information correspondingly in real time;
analyzing the acquired information to obtain a target object included in the acquired information;
determining a corresponding target position of the target object in the first drawing area;
and determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area according to the target position and the second drawing area, wherein the target area and the second drawing area have the same size.
Optionally, wherein:
the first camera based on the tuning creates a first drawing area, which comprises the following steps:
creating a first drawing area based on the resolution of the acquired information of the first camera;
the creating a second drawing area based on the output requirement includes:
creating a second drawing region based on the target resolution limited by the output requirement;
the target resolution is smaller than the resolution of the acquired information of the first camera.
Optionally, the target resolution is the highest resolution of the display screen, or the resolution of the video file selected to be created by the user, or the resolution limited by real-time video transmission.
Optionally, the determining the corresponding target position of the target object in the first drawing area includes:
identifying a target object with corresponding object characteristics in the first drawing area by utilizing an artificial intelligence algorithm;
and extracting the position information corresponding to the identified target object in the first drawing area.
Optionally, the first drawing area includes a center area and an edge area;
the determining process of the target area comprises the following steps:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
Optionally, the central area is: the set of points in the first drawing area for which an area centered on that point and having the size of the second drawing area is entirely contained within the first drawing area; the edge area is the area other than the central area in the first drawing area;
the determining the target area according to the center principle comprises the following steps:
determining, as the target area, the area in the first drawing area that is centered on the target position and has the size of the second drawing area;
the determining the target area according to the edge principle comprises the following steps:
and determining, as the target area, an area from the first drawing area that has the size of the second drawing area, includes the target position, and whose center point meets a proximity condition with respect to the target position.
Optionally, the method further comprises:
determining a target distance between the target object and a device plane of the device where the first camera is located;
and if the target distance meets the distance condition, carrying out zooming processing on the first camera according to the target distance.
A processing apparatus, comprising:
the first creating module is used for creating a first drawing area based on the started first camera, and the first drawing area is used for corresponding to the acquired information sensed by the first camera in real time;
the second creating module is used for creating a second drawing area based on the output requirement, and the second drawing area is used for corresponding to the output information in real time;
the analysis module is used for analyzing the acquired information to obtain a target object included in the acquired information;
the first determining module is used for determining a corresponding target position of the target object in the first drawing area;
and the second determining module is used for determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area according to the target position and the second drawing area, wherein the target area and the second drawing area have the same size.
Optionally, the first drawing area includes a center area and an edge area;
the second determining module is specifically configured to, when determining the target area:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
An electronic device comprising at least:
a first camera;
a memory for storing a set of computer instructions;
a processor for implementing a processing method as claimed in any preceding claim by executing a set of instructions stored on a memory.
As can be seen from the above solutions, according to the processing method, apparatus, and electronic device disclosed in the present application, a first drawing area that corresponds in real time to the acquisition information sensed by a first camera is created based on the turned-on first camera, a second drawing area that corresponds in real time to the output information is created based on the output requirement, and target acquisition information of a target area that includes the target position in the first drawing area is determined as the acquisition information of the second drawing area, according to the target position of a target object included in the acquisition information within the first drawing area and according to the second drawing area. By determining, based on the target position of the target object and the second drawing area, the corresponding part of the acquisition information of the first drawing area as the acquisition information of the second drawing area, the target object can be automatically adapted to a suitable position in the second drawing area that corresponds in real time to the output information (for example, positioned at the center of the second drawing area as far as possible), the user does not need to move the device manually, and the camera shake caused by manually moving the device is correspondingly avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained from the provided drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of the processing method provided in the present application;
FIG. 2 is one example of a created first drawing area provided herein;
FIG. 3 is one example of a created second drawing area provided herein;
FIGS. 4(a)-4(e) are exemplary diagrams provided herein of target areas determined according to the center principle for different target positions;
FIGS. 5 (a) -5 (b) are exemplary diagrams of different candidate regions corresponding to position P provided herein;
FIG. 6 is a schematic diagram of the width and height resolution ratios of the first and second drawing areas provided in the present application;
FIG. 7 is another flow diagram of the processing method provided herein;
FIG. 8 is a schematic view of the processing apparatus provided herein;
fig. 9 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The following describes the embodiments of the present application clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
In order to achieve a good image shooting/video recording effect, with the target object located at the center of the picture as far as possible and without requiring the user to move the device manually, the application discloses a processing method, a processing apparatus, and an electronic device.
The method and apparatus may be applied to electronic devices with cameras (i.e., electronic devices disclosed herein), which may be, but are not limited to, devices in a wide variety of general or special purpose computing device environments or configurations, such as: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, and the like.
The processing procedure of the processing method disclosed in the application is shown in fig. 1, and specifically includes:
step 101, a first drawing area is created based on a turned-on first camera, and the first drawing area is used for corresponding to acquired information sensed by the first camera in real time.
The acquired information sensed by the first camera is an image shot by the first camera or a recorded video image frame.
The electronic device invokes the first camera based on an image shooting instruction for indicating shooting an image or a video recording instruction for indicating recording a video, which are manually triggered by a user or automatically triggered by the device.
In this embodiment of the present application, the viewing angle range of the first camera when collecting information (image/video image frames) is greater than the viewing angle range matched with the output information (image/video image frames), and the resolution of the collected information sensed by the first camera is correspondingly greater than the target resolution of the output information limited by the output requirement.
Preferably, the first camera is a wide-angle camera.
The target resolution of the output information limited by the output requirement is the resolution required when outputting an image or video image frame. The target resolution may be, but is not limited to, the highest resolution of the display screen, or the resolution of a video file that the user has selected to be created, or the resolution limited by real-time video transmission (e.g., real-time video transmission in a video call/video conference), depending on the particular use scenario.
In response to the image shooting instruction or the video recording instruction, the electronic device invokes and starts the first camera, and creates, based on the started first camera, a first drawing area for corresponding in real time to the acquired information sensed by the first camera. The size of the first drawing area matches the resolution of the acquired information (image or video image frames) sensed by the first camera.
Referring to the example of FIG. 2, the first drawing area is represented as a region Preview1 divided into five equal parts in each of the height and width directions.
And 102, creating a second drawing area based on the output requirement, wherein the second drawing area is used for outputting information correspondingly in real time.
The output information is an image that is rendered and output in an image shooting scene, or a video image frame that, in a video recording scene, is output to an encoder to be encoded and stored as a video file, or is transmitted in real time (in a video call/video conference, video frames are transmitted to the opposite-end device in real time).
In response to the image capturing instruction or the video recording instruction, the present embodiment creates a second drawing area for real-time corresponding output information based on the output requirement in addition to the first drawing area based on the first camera.
Specifically, the present embodiment creates the second drawing region based on the target resolution limited by the output requirement.
The size of the second drawing area matches the target resolution of the output information (such as images/video image frames) limited by the output requirement. Because the target resolution limited by the output requirement is smaller than the resolution of the acquired information (image or video image frames) of the first camera, the size of the second drawing area is correspondingly smaller than the size of the first drawing area.
Referring to the example of FIG. 3, the second drawing area is represented as a region Preview2 divided into three equal parts in each of the height and width directions, where each cell of Preview2 has the same size as the corresponding cell of Preview1 in the same direction.
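The two drawing areas can be pictured as pixel buffers whose sizes follow the two resolutions. Below is a minimal sketch of this setup, assuming NumPy arrays as stand-ins for the drawing areas; the function name create_drawing_areas and the example resolutions (3200x1800 sensed, 1920x1080 output, matching the FHD example later in this description) are illustrative assumptions, not part of the patent.

    import numpy as np

    def create_drawing_areas(camera_resolution, target_resolution):
        """Create the two drawing areas as RGB pixel buffers.

        camera_resolution: (width, height) of the frames sensed by the
            first camera, e.g. (3200, 1800) for a wide-angle camera.
        target_resolution: (width, height) limited by the output
            requirement, e.g. (1920, 1080) for an FHD video file.
        """
        cam_w, cam_h = camera_resolution
        out_w, out_h = target_resolution
        # The output must fit inside the sensed frame in both directions.
        assert out_w <= cam_w and out_h <= cam_h
        preview1 = np.zeros((cam_h, cam_w, 3), dtype=np.uint8)  # first drawing area
        preview2 = np.zeros((out_h, out_w, 3), dtype=np.uint8)  # second drawing area
        return preview1, preview2

    preview1, preview2 = create_drawing_areas((3200, 1800), (1920, 1080))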
Step 103, analyzing the acquired information of the first camera to obtain a target object included in the acquired information.
The target object is the key subject to be photographed in a shooting or video recording scene, for example, a person, an animal, or a scenic feature. In implementation, the target object in the first drawing area may be identified according to the object features of an object indicated by the user's specified operation (e.g., a person object marked by the user in the photo or video preview), or the electronic device may identify, based on an algorithm, the key shooting object in the image or video image frame as the target object.
In the video recording scene, preferably, a corresponding artificial intelligence (AI) algorithm can be used to identify, in real time in each video image frame acquired by the first camera, the target object whose object features were learned in advance. The object features may include, but are not limited to, facial features and/or skeletal features of objects such as humans or animals.
Preferably, the facial features include facial features corresponding to different facial expressions of an object such as a person or animal, and the skeletal features include skeletal features corresponding to different poses of the object such as a person or animal.
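The patent does not name a concrete recognition algorithm, only that one matching pre-learned object features is used, so the sketch below substitutes OpenCV's stock Haar-cascade face detector purely as an illustration of this step; the function name and the choice of detector are assumptions.

    import cv2

    # Stand-in for the AI recognition step: a stock Haar-cascade face
    # detector shipped with OpenCV, substituted here for the pre-learned
    # object features (facial/skeletal) described in the patent.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def find_target_position(frame_bgr):
        """Return the (x, y) pixel center of the largest detected face in
        the first drawing area's frame, or None if no target is found."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
        return (x + w // 2, y + h // 2)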
Step 104, determining a corresponding target position of the target object in the first drawing area.
Then, the position information of the target object in the first drawing area is further determined to obtain the target position. The target position may be, but is not limited to, a position represented by a pixel coordinate point.
And 105, determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area according to the target position and the second drawing area, wherein the target area and the second drawing area have the same size.
After the corresponding target position of the target object in the first drawing area is determined, this step determines, from the first drawing area according to the target position and the second drawing area, a target area that includes the target position and has the same size as the second drawing area, and takes the target acquisition information corresponding to the determined target area as the acquisition information of the second drawing area.
Wherein the first drawing area includes a center area and an edge area.
In the present embodiment, the center area of the first drawing area is set as: the set of points in the first drawing area for which an area centered on that point and having the size of the second drawing area is entirely contained within the first drawing area.
Accordingly, the edge area of the first drawing area is set as: the area of the first drawing area other than the center area.
When the target area is determined, if the target position of the target object belongs to the central area of the first drawing area, determining the target area according to a central principle; otherwise, if the target position belongs to the edge area of the first drawing area, determining the target area according to the edge principle.
The determination of the target area according to the center principle can be further implemented as: determining, as the target area, the area in the first drawing area that is centered on the target position corresponding to the target object and has the size of the second drawing area.
For example, if the target position of the target object in the first drawing area is the center position O of the first drawing area, as shown in FIG. 4(a), the area area_1 centered on O and having the same size as the second drawing area may be determined as the target area based on the center principle. If the target position is the position A of the first drawing area, referring to FIG. 4(b), the area area_2 centered on A and having the same size as the second drawing area may be determined as the target area. Similarly, FIGS. 4(c)-4(e) provide schematic diagrams of the target areas area_3, area_4, and area_5 determined in the cases where the target position is the position B, C, or D in the first drawing area, respectively.
The determination of the target area according to the edge principle can be further implemented as: determining, as the target area, an area from the first drawing area that has the size of the second drawing area, contains the target position, and whose center point meets a proximity condition with respect to the target position.
Preferably, the above proximity condition characterizes that the distance between the center point of the area and the target position is the shortest among the distances between the target position and the center points of the candidate areas in the candidate area set, where a candidate area is an area in the first drawing area that has the size of the second drawing area and contains the target position.
Take the position P as an example of the target position of the target object in the first drawing area, and suppose the candidate areas that have the size of the second drawing area and contain the position P are the areas area_x and area_y, shown in FIGS. 5(a) and 5(b), respectively. Since the distance between the center of area_x and the position P is smaller than the distance between the center of area_y and the position P, the area area_x is determined as the target area.
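Assuming positions are pixel coordinates and areas are axis-aligned rectangles, both principles can be realized with a single clamp, as in the sketch below: clamping a target-centered rectangle to the bounds of the first drawing area leaves it centered on the target position whenever that position lies in the center area (center principle), and otherwise returns the in-bounds candidate whose center point is nearest the target position, which is exactly the proximity condition of the edge principle. The function name is illustrative.

    def determine_target_region(target_pos, first_size, second_size):
        """Top-left corner of the target area within the first drawing area.

        Clamping a target-centered rectangle to the bounds of the first
        drawing area realizes both principles: inside the center area the
        rectangle stays centered on the target position (center principle);
        near the border the clamp returns the in-bounds area whose center
        is closest to the target position (edge principle).
        """
        tx, ty = target_pos
        w1, h1 = first_size   # size of the first drawing area
        w2, h2 = second_size  # size of the second drawing area / target area
        left = min(max(tx - w2 // 2, 0), w1 - w2)
        top = min(max(ty - h2 // 2, 0), h1 - h2)
        return left, top      # target area spans [left, left+w2) x [top, top+h2)

    # Target at the exact center of the first drawing area -> centered area.
    assert determine_target_region((1600, 900), (3200, 1800), (1920, 1080)) == (640, 360)
    # Target near the top-left border -> area pinned to the corner.
    assert determine_target_region((100, 50), (3200, 1800), (1920, 1080)) == (0, 0)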
After the target area is determined, the acquisition information corresponding to the target area (i.e., the target acquisition information) is taken as the acquisition information of the second drawing area, and the acquisition information of the second drawing area serves as the output information when information is output.
In an embodiment, the acquired information sensed by the first camera is image information in a shooting scene. In this embodiment, the image corresponding to the target area (essentially a part of the image acquired by the first camera) is taken as the image of the second drawing area; accordingly, the image of the second drawing area can be output in real time on the display screen interface of the device (for the user to view the shot image) and stored in the device.
In another embodiment, the acquired information sensed by the first camera is a video image frame in a video recording scene (such as a video acquisition scene or a video call/video conference scene). In this embodiment, the portion of the video image frame acquired by the first camera that is located in the target area is taken as the video image frame of the second drawing area. Accordingly, the video image frames of the second drawing area obtained during recording can be output in real time on the display screen interface of the device (for the user to view during recording), and subsequent processing matched with the current scene is performed. For example, in a video call/video conference scene, the video image frames of the second drawing area are transmitted in real time to the opposite-end device participating in the call/conference; in a video file recording scene, the video image frames of the second drawing area are transmitted to a video encoder, which encodes them to form the recorded video file.
In practice, the portion located in the target area may be cut out from the image or video image frame corresponding to the first drawing area by image cutting means and used as the image or video image frame of the second drawing area; alternatively, the portion of the image or video image frame corresponding to the first drawing area that is located in the target area may be directly extracted, by an information extraction technique and without any clipping operation, as the image or video image frame of the second drawing area.
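As a sketch of the extraction variant, assuming the frame corresponding to the first drawing area is held in a NumPy array: slicing returns a view of the target area's pixels, i.e. extraction without a copy or clipping operation. The function name is illustrative.

    def extract_second_area(frame, top_left, second_size):
        """Take the target area's pixels from the first drawing area's frame
        as the acquisition information of the second drawing area. With a
        NumPy array this is a view, i.e. extraction without a copy."""
        left, top = top_left
        w2, h2 = second_size
        return frame[top:top + h2, left:left + w2]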
According to the method, a wide-angle camera is used as the first camera, and its wide viewing angle is fully utilized to collect images or video image frames with a resolution greater than the target resolution limited by the output requirement. Based on the target position of the target object in the first drawing area corresponding to the first camera, the acquisition information of the target area containing the target position is selected from the first drawing area, according to the center principle or the edge principle, as the acquisition information of the second drawing area. The target object is thereby automatically adapted to the center position, or a position as close to the center as possible, of the second drawing area that corresponds in real time to the output information; the user does not need to move the device manually, and the camera shake caused by manually moving the device is correspondingly avoided.
According to the embodiment of the application, in a video recording scene, the electronic device can automatically perform, for each video image frame acquired by the first camera, image-area cutting or information extraction centered (as far as possible) on the target object (such as a person object). As the first camera continuously acquires video image frames, this forms a following cutting/extraction effect centered (as far as possible) on the target object, so that the target object in the recorded video file, or in the series of video image frames transmitted in real time, is always at (or as close as possible to) the center of the picture. Because the wide-angle camera covers a wide field of view, the photographer does not need to move the device to follow the movement of the photographed target object (such as a person object), and the video picture jitter caused by moving the device is correspondingly avoided.
In an embodiment, referring to FIG. 6, suppose the width and height resolutions of the second drawing area are given and denoted as W2 and H2, respectively, and suppose the proportionality coefficient Kw of the width-direction resolution of the first drawing area to that of the second drawing area, and the proportionality coefficient Kh of the height-direction resolution of the first drawing area to that of the second drawing area, are given. The required original output resolution of the first camera, such as a wide-angle camera, is then:
W1=W2*Kw;
H1=H2*Kh。
W1 and H1 represent the width and height resolutions of the first drawing area, respectively.
Take as an example the case where the resolution of the video image frames of the video file to be output when recording video is FHD (Full High Definition) resolution, i.e., 1920 (W2) x 1080 (H2), and the proportionality coefficients Kw and Kh are both 5/3. Then:
W1=1920*5/3=3200;
H1=1080*5/3=1800。
that is, the first camera, such as the wide-angle camera, is required to have a wide and high resolution of 3200 and 1800 as the original output resolution, so that the resolution of the video image frame of the recorded video is ensured to be correct, and the user's look and feel will not be blurred due to the processing of clipping and scaling the picture based on the first drawing region and the second drawing region.
In an embodiment, referring to the flowchart of the processing method provided in fig. 7, the processing method disclosed in the present application may further include:
step 701, determining a target distance between a target object and a device plane of a device where the first camera is located.
Specifically, the distance between the target object and the device plane of the device where the first camera is located can be detected according to a corresponding ranging technology, yielding the target distance. The ranging technology employed may be, but is not limited to, time of flight (TOF).
Step 702, if the target distance meets the distance condition, performing zooming processing on the first camera according to the target distance.
The distance condition is a condition representing that the distance between the target object and the device plane of the device where the first camera is located is too close or too far for the first camera to image the target object clearly. It may be, but is not limited to: the distance between the target object and the device plane is greater than a preset first threshold or smaller than a preset second threshold, where the second threshold is smaller than the first threshold.
When the target distance meets the distance condition, the first camera is zoomed. That is, as the distance between the target object and the electronic device changes, the device automatically judges, based on the distance condition, whether zooming is needed; when the distance condition is met, the focal length of the first camera is adjusted to a suitable value according to the distance between the target object and the device plane. Clear imaging of the target object is thus achieved without any manual zooming operation by the user, and the camera shake and picture jumping caused by manual zooming are avoided, so that the shooting effect is better.
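A minimal sketch of such a distance condition follows. The thresholds and the camera.set_zoom_ratio interface are illustrative assumptions; the patent only requires that the focal length be adjusted according to the target distance when the condition is met.

    def maybe_zoom(camera, target_distance_m, far_m=3.0, near_m=0.3):
        """Adjust the first camera's focal length when the TOF-measured
        target distance meets the distance condition. The thresholds and
        the camera.set_zoom_ratio interface are illustrative assumptions."""
        if target_distance_m > far_m:
            # Target too far for clear imaging: zoom in proportionally.
            camera.set_zoom_ratio(target_distance_m / far_m)
        elif target_distance_m < near_m:
            # Target too close: return to the widest focal length.
            camera.set_zoom_ratio(1.0)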
Corresponding to the above processing method, the embodiment of the present application further discloses a processing device, where the composition structure of the device is shown in fig. 8, and specifically includes:
a first creating module 801, configured to create a first drawing area based on a turned-on first camera, where the first drawing area is used to correspond in real time to the acquired information sensed by the first camera;
a second creating module 802, configured to create a second drawing area based on the output requirement, where the second drawing area is used to correspond to the output information in real time;
an analysis module 803, configured to analyze the acquired information to obtain a target object included in the acquired information;
a first determining module 804, configured to determine a target position of the target object corresponding to the first drawing area;
and a second determining module 805, configured to determine, according to the target position and the second drawing area, target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area, where the target area has a size identical to that of the second drawing area.
In an embodiment, the first creating module 801 is specifically configured to: creating a first drawing area based on the resolution of the acquired information of the first camera;
the second creation module 802 is specifically configured to: creating a second drawing region based on the target resolution limited by the output requirement;
the target resolution is smaller than the resolution of the acquired information of the first camera.
In one embodiment, the target resolution is the highest resolution of the display screen, or the resolution of the video file selected to be created by the user, or the resolution limited by real-time video transmission.
In an embodiment, the first determining module 804 is specifically configured to: identifying a target object with corresponding object characteristics in the first drawing area by utilizing an artificial intelligence algorithm; and extracting the position information corresponding to the identified target object in the first drawing area.
In an embodiment, the first drawing area includes a center area and an edge area;
the second determining module 805 is specifically configured to, when determining the target area,:
if the target position belongs to the central area of the first drawing area, determining the target area according to a central principle; and if the target position belongs to the edge area of the first drawing area, determining the target area according to an edge principle.
In one embodiment, the center area of the first drawing area is: the set of points in the first drawing area for which an area centered on that point and having the size of the second drawing area is entirely contained within the first drawing area; the edge area of the first drawing area is the area other than the center area in the first drawing area;
the second determining module 805 is specifically configured to, when determining the target area according to the center principle:
determining, as the target area, the area in the first drawing area that is centered on the target position and has the size of the second drawing area;
the second determining module 805 is specifically configured to, when determining the target area according to the edge principle:
and determining, as the target area, an area from the first drawing area that has the size of the second drawing area, contains the target position, and whose center point meets the proximity condition with respect to the target position.
In an embodiment, the apparatus further includes:
the zooming processing module is used for: determining a target distance between a target object and a device plane of the device where the first camera is located; and if the determined target distance meets the distance condition, carrying out zooming processing on the first camera according to the target distance.
The processing apparatus disclosed in the embodiments of the present application corresponds to the processing method disclosed in the method embodiments above, so its description is relatively brief; for the relevant similarities, refer to the description of the corresponding method embodiments, which is not repeated here.
The embodiment of the application also discloses an electronic device, the composition structure of which is shown in fig. 9, comprising:
a first camera 901;
a memory 902 for storing at least one set of instructions.
The set of instructions may be implemented in the form of a computer program.
The processor 903 is configured to implement the processing method disclosed in any of the above method embodiments by executing the instruction set in the memory.
The processor 903 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device.
In addition, the electronic device may include communication interfaces, communication buses, and the like. The memory, processor and communication interface communicate with each other via a communication bus.
The communication interface is used for communication between the electronic device and other devices. The communication bus may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus or the like, and may be classified as an address bus, a data bus, a control bus, or the like.
It should be noted that the embodiments in this specification are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for identical or similar parts between the embodiments, reference may be made to one another.
For convenience of description, the above system or apparatus is described as being functionally divided into various modules or units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
Finally, it is further noted that relational terms such as first, second, third, fourth, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations are intended to fall within the scope of the present application.

Claims (10)

1. A processing method, applied to an electronic device having a camera, the method comprising:
creating a first drawing area based on a turned-on first camera, wherein the first drawing area is used for corresponding to acquired information sensed by the first camera in real time;
creating a second drawing area based on the output requirement, wherein the second drawing area is used for outputting information correspondingly in real time; the size of the second drawing area is matched with the target resolution of the output information limited by the output requirement, wherein the view angle range of the first camera when the first camera collects the information is larger than the view angle range of the output information;
analyzing the acquired information to obtain a target object included in the acquired information;
determining a corresponding target position of the target object in the first drawing area;
and determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area according to the target position and the second drawing area, wherein the target area and the second drawing area have the same size.
2. The method according to claim 1, wherein:
the creating a first drawing area based on the turned-on first camera includes:
creating a first drawing area based on the resolution of the acquired information of the first camera;
the creating a second drawing area based on the output requirement includes:
creating a second drawing region based on the target resolution limited by the output requirement;
the target resolution is smaller than the resolution of the acquired information of the first camera.
3. The method of claim 2, the target resolution being a highest resolution of a display screen, or a resolution of a video file selected for creation by a user, or a resolution limited by real-time video transmission.
4. The method of claim 1, the determining a corresponding target position of the target object in the first rendering region, comprising:
identifying a target object with corresponding object characteristics in the first drawing area by utilizing an artificial intelligence algorithm;
and extracting the position information corresponding to the identified target object in the first drawing area.
5. The method of claim 1, the first drawing region comprising a center region and an edge region;
the determining process of the target area comprises the following steps:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
6. The method of claim 5, wherein the central region is: the set of points in the first drawing region for which a region centered on that point and having the size of the second drawing region is entirely contained within the first drawing region; the edge region is the region other than the central region in the first drawing region;
the determining the target area according to the center principle comprises the following steps:
determining, as the target area, the region in the first drawing region that is centered on the target position and has the size of the second drawing region;
the determining the target area according to the edge principle comprises the following steps:
and determining, as the target area, a region from the first drawing region that has the size of the second drawing region, contains the target position, and whose center point meets a proximity condition with respect to the target position.
7. The method of claim 1, further comprising:
determining a target distance between the target object and a device plane of the device where the first camera is located;
and if the target distance meets the distance condition, carrying out zooming processing on the first camera according to the target distance.
8. A processing device, applied to an electronic device having a camera, the device comprising:
the first creating module is used for creating a first drawing area based on the started first camera, and the first drawing area is used for corresponding to the acquired information sensed by the first camera in real time;
the second creating module is used for creating a second drawing area based on the output requirement, and the second drawing area is used for corresponding to the output information in real time; the size of the second drawing area is matched with the target resolution of the output information limited by the output requirement, wherein the view angle range of the first camera when the first camera collects the information is larger than the view angle range of the output information;
the analysis module is used for analyzing the acquired information to obtain a target object included in the acquired information;
the first determining module is used for determining a corresponding target position of the target object in the first drawing area;
and the second determining module is used for determining target acquisition information of a target area including the target position in the first drawing area as acquisition information of the second drawing area according to the target position and the second drawing area, wherein the target area and the second drawing area have the same size.
9. The apparatus of claim 8, the first drawing region comprising a center region and an edge region;
the second determining module is specifically configured to, when determining the target area:
if the target position belongs to the central area, determining the target area according to a central principle;
and if the target position belongs to the edge area, determining the target area according to an edge principle.
10. An electronic device comprising at least:
a first camera;
a memory for storing a set of computer instructions;
a processor for implementing the processing method according to any of claims 1-7 by executing a set of instructions stored on a memory.
CN202111043677.XA 2021-09-07 2021-09-07 Processing method and device and electronic equipment Active CN113691731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111043677.XA CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111043677.XA CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113691731A (en) 2021-11-23
CN113691731B (en) 2023-06-23

Family

ID=78585469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111043677.XA Active CN113691731B (en) 2021-09-07 2021-09-07 Processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113691731B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN110191369A (en) * 2019-06-06 2019-08-30 广州酷狗计算机科技有限公司 Image interception method, apparatus, equipment and storage medium
CN111524145A (en) * 2020-04-13 2020-08-11 北京智慧章鱼科技有限公司 Intelligent picture clipping method and system, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005084980A (en) * 2003-09-09 2005-03-31 Fuji Photo Film Co Ltd Data generation unit for card with face image, method and program
CN104952027A (en) * 2014-10-11 2015-09-30 腾讯科技(北京)有限公司 Face-information-contained picture cutting method and apparatus
CN112700454A (en) * 2020-12-28 2021-04-23 北京达佳互联信息技术有限公司 Image cropping method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN110191369A (en) * 2019-06-06 2019-08-30 广州酷狗计算机科技有限公司 Image interception method, apparatus, equipment and storage medium
CN111524145A (en) * 2020-04-13 2020-08-11 北京智慧章鱼科技有限公司 Intelligent picture clipping method and system, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113691731A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
US8810673B2 (en) Composition determination device, composition determination method, and program
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
CN110650291B (en) Target focus tracking method and device, electronic equipment and computer readable storage medium
US10701281B2 (en) Image processing apparatus, solid-state imaging device, and electronic apparatus
JP2008141437A (en) Imaging apparatus, image processor, image processing method for them, and program to make computer execute its method
CN113973190A (en) Video virtual background image processing method and device and computer equipment
CN110661977B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
KR20100030596A (en) Image capturing apparatus, method of determining presence or absence of image area, and recording medium
CN113766125A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN111614867B (en) Video denoising method and device, mobile terminal and storage medium
CN110493512B (en) Photographic composition method, photographic composition device, photographic equipment, electronic device and storage medium
WO2020171379A1 (en) Capturing a photo using a mobile device
JP2007258869A (en) Image trimming method and apparatus, and program
CN114390201A (en) Focusing method and device thereof
CN112036311A (en) Image processing method and device based on eye state detection and storage medium
JP2018081402A (en) Image processing system, image processing method, and program
CN109981967B (en) Shooting method and device for intelligent robot, terminal equipment and medium
JP2010141847A (en) Image processor and method of processing image
US9686470B2 (en) Scene stability detection
CN113691731B (en) Processing method and device and electronic equipment
CN116958795A (en) Method and device for identifying flip image, electronic equipment and storage medium
KR102580281B1 (en) Related object detection method and device
JP2013179614A (en) Imaging apparatus
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN112565586A (en) Automatic focusing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant