CN117714840A - Image processing method, device, chip, electronic equipment and medium

Info

Publication number: CN117714840A
Application number: CN202311037882.4A
Authority: CN (China)
Prior art keywords: image, main body, module, acquiring, subject
Legal status: pending
Other languages: Chinese (zh)
Inventor: 陈楷文
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd; priority to CN202311037882.4A; publication of CN117714840A

Abstract

Embodiments of the present application provide an image processing method, an image processing apparatus, a chip, an electronic device, and a medium. The method includes: acquiring a first image captured by a shooting module; obtaining subject detection data of a first subject from the first image; obtaining a magnification of a first field angle according to the magnification of the field angle used by the shooting module, the subject detection data of the first subject, and a set image display requirement; obtaining a second image according to the magnification of the first field angle, where the second image is the portion of the first image corresponding to the subject detection data of the first subject and the first field angle, and the second image meets the image display requirement; and outputting the second image to a display module. The embodiments of the present application remove the need for manual zooming and the tendency to lose the subject, thereby improving the image display effect and the user's shooting experience.

Description

Image processing method, device, chip, electronic equipment and medium
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to an image processing method, an image processing device, a chip, an electronic device, and a medium.
Background
To achieve a desired image display effect (for example, keeping the subject as close to the center of the displayed image as possible while it occupies a large proportion of the displayed image), a user shooting a moving subject can manually zoom to adjust the image scaling of the shooting module, and can manually adjust the position and orientation of the shooting module.
However, the movement of a subject is mostly uncertain and agile, so the user has to zoom manually and frequently when shooting a moving subject; moreover, when the image is zoomed in, the subject is easily lost, partially or completely, from the displayed image. The image display effect is therefore poor, and the user's shooting experience suffers.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, a chip, an electronic device, and a medium, which address the need for manual zooming and the tendency to lose the subject, thereby improving the image display effect and the user's shooting experience.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring a first image captured by a shooting module; obtaining subject detection data of a first subject from the first image; obtaining a magnification of a first field angle according to the magnification of the field angle used by the shooting module, the subject detection data of the first subject, and a set image display requirement; obtaining a second image according to the magnification of the first field angle, where the second image is the portion of the first image corresponding to the subject detection data of the first subject and the first field angle, and the second image meets the image display requirement; and outputting the second image to a display module.
Because the focus-tracking subject moves, its position and size in the images captured by the shooting module in real time are not fixed. In the embodiments of the present application, subject detection is performed on the initial image captured by the shooting module, and the portion of the initial image corresponding to the focus-tracking subject is cropped out, based on the detection result, and sent for display. The desired image display effect can thus be achieved and loss of the subject from the displayed image avoided; the image display effect is good, the user does not need to zoom manually, and the shooting experience is good.
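To make the flow concrete, the following Python sketch illustrates steps 1003 to 1005 under stated assumptions: the Detection structure, the square-law relation between magnification and the subject's area fraction, and all numeric defaults are illustrative choices, not the claimed implementation (subject detection, step 1002, is assumed to be supplied externally).

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    cx: int  # subject center x in the first image (pixels)
    cy: int  # subject center y
    w: int   # width of the smallest rectangle containing the subject
    h: int   # height of that rectangle

def process_frame(first_image, det, capture_mag, duty_threshold=0.4, max_mag=10.0):
    """Steps 1003-1005 for one frame; `first_image` is an H x W (x C) array.
    Derives the first field-angle magnification, crops the second image
    around the subject, and returns it for sending to display."""
    if det is None:  # subject not detected: one option is to display the first image
        return first_image
    img_h, img_w = first_image.shape[:2]
    # Step 1003: choose a magnification at which the subject's area fraction
    # reaches the set proportion threshold (fraction grows with mag squared).
    frac = (det.w * det.h) / (img_w * img_h)
    first_mag = capture_mag * math.sqrt(duty_threshold / max(frac, 1e-9))
    first_mag = min(max(first_mag, capture_mag), max_mag)
    # Step 1004: crop a window scaled down by capture_mag / first_mag, centered
    # on the subject but clamped so it stays inside the first image.
    crop_w = max(1, int(img_w * capture_mag / first_mag))
    crop_h = max(1, int(img_h * capture_mag / first_mag))
    x0 = min(max(det.cx - crop_w // 2, 0), img_w - crop_w)
    y0 = min(max(det.cy - crop_h // 2, 0), img_h - crop_h)
    # Step 1005: the caller outputs this second image to the display module.
    return first_image[y0:y0 + crop_h, x0:x0 + crop_w]
```

Running such a routine on each frame captured in real time keeps the subject's on-screen size roughly constant without any manual zoom.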
Optionally, the image processing method further includes: acquiring first information describing the positional relationship between the first subject and the boundary of the first image; and controlling the deflection angle of a prism to change when the first information does not meet a set positional-relationship requirement, where a change in the deflection angle of the prism produces a corresponding change in the image capture range of the shooting module.
After subject detection is completed, the positional relationship between the focus-tracking subject and the boundary of the initial image can be obtained. This relationship reflects whether the subject is about to go out of frame (i.e., about to be partially lost from the initial image); if so, the prism can be controlled to deflect accordingly as compensation. This keeps the subject in frame in the initial images captured in real time by the shooting module and supports a displayed image that meets expectations.
Optionally, the subject detection data of the first subject includes a subject detection size of the first subject, and the image processing method further includes: acquiring the minimum spacing, in a first direction, between the boundary corresponding to the first subject and the boundary of the first image, and the minimum spacing between them in a second direction, where the boundary corresponding to the first subject is the boundary of the second image or the boundary of the subject detection size of the first subject, and the first direction and the second direction are perpendicular. The first information does not meet the positional-relationship requirement in any of the following cases: the minimum spacing in the first direction is not greater than a first spacing threshold, or the minimum spacing in the second direction is not greater than a second spacing threshold.
The minimum spacings between the boundary corresponding to the focus-tracking subject and the boundary of the initial image reflect, in each of the two directions, whether the subject may be about to go out of frame, so that compensation can subsequently be applied on demand in the corresponding direction.
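A minimal sketch of this check, assuming the first information is the pair of minimum spacings between the subject's bounding rectangle (diagonal corners (x1, y1) and (x2, y2)) and the edges of a first image of size img_w x img_h; the names and the axis convention are illustrative.

```python
def min_boundary_spacings(x1, y1, x2, y2, img_w, img_h):
    """Minimum spacing between the subject's rectangle and the image boundary
    in the first (horizontal) and second (vertical) directions."""
    dx = min(x1, img_w - x2)  # first-direction minimum spacing
    dy = min(y1, img_h - y2)  # second-direction minimum spacing
    return dx, dy

def violates_requirement(dx, dy, first_threshold, second_threshold):
    # Per the claim wording, the requirement is unmet in either case:
    return dx <= first_threshold or dy <= second_threshold
```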
Optionally, controlling the deflection angle of the prism to change when the first information does not meet the set positional-relationship requirement includes: if the minimum spacing in the first direction is not greater than the first spacing threshold, obtaining a first deflection-angle adjustment corresponding to that spacing from a first preset correspondence between boundary spacing and deflection-angle adjustment, and adjusting the deflection angle of the prism in the first direction accordingly; and if the minimum spacing in the second direction is not greater than the second spacing threshold, obtaining a second deflection-angle adjustment corresponding to that spacing from a second preset correspondence between boundary spacing and deflection-angle adjustment, and adjusting the deflection angle of the prism in the second direction accordingly.
The deflection angles of the prism in the two directions can thus be adjusted independently through preset correspondences, enabling precise adjustment of the prism's deflection angle.
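The preset correspondence could be as simple as a piecewise table: the smaller the remaining spacing, the larger the corrective deflection. Everything below (thresholds, angle values, and the prism handle) is a hypothetical illustration, not calibrated data.

```python
FIRST_SPACING_THRESHOLD = 40   # pixels; illustrative values only
SECOND_SPACING_THRESHOLD = 40

def deflection_adjustment(spacing_px):
    """Assumed piecewise-constant preset correspondence between boundary
    spacing and deflection-angle adjustment (degrees)."""
    if spacing_px <= 10:
        return 2.0
    if spacing_px <= 25:
        return 1.0
    return 0.5

def compensate(dx, dy, prism):
    """`prism` is a hypothetical motor-control handle with deflect_x/deflect_y."""
    if dx <= FIRST_SPACING_THRESHOLD:    # adjust in the first direction on demand
        prism.deflect_x(deflection_adjustment(dx))
    if dy <= SECOND_SPACING_THRESHOLD:   # adjust in the second direction on demand
        prism.deflect_y(deflection_adjustment(dy))
```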
Optionally, obtaining subject detection data of the first subject from the first image includes: scaling the first image down proportionally to obtain a third image; and performing subject detection on the third image according to pre-acquired subject identification data of the first subject to obtain the subject detection data of the first subject.
Performing subject detection on a proportionally reduced copy of the initial image preserves detection accuracy while improving detection efficiency, which helps keep the image display real-time.
Optionally, before the first image captured by the shooting module is acquired, the image processing method further includes: in response to a subject selection operation on an image displayed by the display module, taking the selected subject as the first subject and acquiring subject identification data of the first subject. Obtaining subject detection data of the first subject from the first image then includes: performing subject detection according to the subject identification data of the first subject and the first image to obtain the subject detection data of the first subject.
While images are displayed, the user can select a focus-tracking subject as needed, after which images dominated by the selected subject can be displayed for the user to view.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an acquisition module configured to acquire a first image captured by a shooting module; a sensing module configured to obtain subject detection data of a first subject from the first image; a decision module configured to obtain a magnification of a first field angle according to the magnification of the field angle used by the shooting module, the subject detection data of the first subject, and a set image display requirement; a processing module configured to obtain a second image according to the magnification of the first field angle, where the second image is the portion of the first image corresponding to the subject detection data of the first subject and the first field angle, and the second image meets the image display requirement; and a display-feed module configured to output the second image to a display module.
In a third aspect, an embodiment of the present application provides an electronic chip, including: a processor for executing computer program instructions stored on a memory, wherein the computer program instructions, when executed by the processor, trigger the electronic chip to perform the method according to any of the first aspects.
In a fourth aspect, embodiments of the present application provide an electronic device comprising one or more memories for storing computer program instructions, and one or more processors, wherein the computer program instructions, when executed by the one or more processors, trigger the electronic device to perform a method as in any of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the method as in any one of the first aspects.
In a sixth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method as in any of the first aspects.
For the technical effects of the foregoing aspects, reference may be made to one another; they are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic diagram of displayed images at different field angles;
Fig. 3 is a schematic diagram of a displayed image;
Fig. 4 is a schematic diagram of another displayed image;
Fig. 5 is a schematic diagram of a displayed image in one scenario according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a displayed image in another scenario according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an image captured at one prism deflection angle according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an image captured at another prism deflection angle according to an embodiment of the present application;
Fig. 9 is a schematic diagram of an image processing system for implementing image processing according to an embodiment of the present application;
Fig. 10 is a flowchart of an image processing method according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, of the embodiments of the present application. All other embodiments, based on the embodiments herein, which would be apparent to one of ordinary skill in the art without making any inventive effort, are intended to be within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "at least one" as used herein means one or more, and "a plurality" means two or more. The term "and/or" describes an association between objects and indicates that three relationships are possible: for example, "A and/or B" may mean that A exists alone, that A and B both exist, or that B exists alone, where A and B each may be singular or plural. The character "/" generally indicates an "or" relationship between the objects it joins. "At least one of the following items" and similar expressions mean any combination of those items, including any combination of single or plural items. For example, "at least one of a, b and c" may represent: a; b; c; a and b; a and c; b and c; or a, b and c; where each of a, b and c may be singular or plural.
It should be understood that although the terms first, second, etc. may be used in embodiments of the present application to describe the set threshold values, these set threshold values should not be limited to these terms. These terms are only used to distinguish the set thresholds from each other. For example, a first set threshold may also be referred to as a second set threshold, and similarly, a second set threshold may also be referred to as a first set threshold, without departing from the scope of embodiments of the present application.
The image processing method provided in any of the embodiments of the present application may be applied to the electronic device 100 shown in fig. 1. Fig. 1 shows a schematic configuration of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a camera 193, a display 194, and the like. Wherein the sensor module 180 may include a touch sensor, an ambient light sensor, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
In some embodiments, the processor 110 may be a System On Chip (SOC), and the processor 110 may include a central processing unit (Central Processing Unit, CPU) and may further include other types of processors. In some embodiments, the processor 110 may be a PWM control chip.
The processor 110 may also include the necessary hardware accelerators or logic processing hardware circuitry, such as an ASIC, or one or more integrated circuits for controlling the execution of a technical program, etc. Further, the processor 110 may have a function of operating one or more software programs, which may be stored in a storage medium.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the memory of electronic device 100 may be read-only memory (ROM), other types of static storage devices that can store static information and instructions, random access memory (random access memory, RAM), or other types of dynamic storage devices that can store information and instructions, electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), or any computer-readable medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In some embodiments, the processor 110 and the memory may be combined into a single processing device, or may be separate components, and the processor 110 may be configured to execute program code stored in the memory. In particular implementations, the memory may also be integrated into the processor 110 or may be separate from the processor 110.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The processor 110 may perform image processing on the image captured by the camera 193 to obtain a display image, and output the display image to the display screen 194 for image display.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The ambient light sensor is used for sensing ambient light brightness. The camera 193 can take images in combination with ambient light, such as light replenishment as needed, etc.
A touch sensor is also referred to as a "touch device". The touch sensor may be disposed on the display screen 194; together they form a touchscreen. The touch sensor detects touch operations on or near it and may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor may instead be disposed on a surface of the electronic device 100 at a location different from the display screen 194. For example, while the touchscreen displays a preview image or a captured image, the user may tap the touchscreen to select the focus-tracking subject.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
To achieve a desired image display effect (for example, keeping the photographed subject as close to the center of the displayed image as possible while it occupies a large proportion of the displayed image), a user shooting a moving subject may manually zoom to adjust the image scaling (or the field-angle magnification) of the shooting module, and may manually adjust the position and orientation of the shooting module.
However, the subject's movement is uncertain and agile, so the user has to zoom manually and frequently when shooting a moving subject; moreover, when the image is zoomed in, the subject is easily lost, partially or completely, from the displayed image (also described as the subject going out of frame). The image display effect is therefore poor, and the user's shooting experience suffers.
For example, when the photographed subject is far from the user, the user may manually zoom in to shoot with a larger field-angle magnification (e.g., 4×FOV), so that the subject does not appear too small in the displayed image; when the subject is close, the user may manually zoom out to shoot with a smaller field-angle magnification (e.g., 1×FOV), so that the subject does not appear too large or get displayed incompletely. As the subject's distance changes, the user must zoom manually and frequently, which makes for a poor shooting experience. Furthermore, the image scaling chosen by the user is often hard to control accurately and prone to lag, resulting in poor image display. Here FOV (field of view) denotes the field angle.
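As a rough numeric illustration of how magnification narrows the field angle (assuming the magnification scales the effective focal length, so the tangent of the half-angle scales as its inverse — an idealized model, not taken from this application):

```python
import math

def field_angle_deg(base_fov_deg, magnification):
    """Field angle at a given zoom magnification under the assumption above."""
    half = math.radians(base_fov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half) / magnification))

print(field_angle_deg(80.0, 1.0))  # 80.0 -> the wide 1x FOV
print(field_angle_deg(80.0, 4.0))  # ~23.7 -> 4x FOV sees a much narrower cone
```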
Referring to fig. 2, the image capture scene may be the basketball court scene shown in fig. 2: the boundary line of the court is indicated by reference numeral 201, the boundary line of the buffer area outside the court by reference numeral 202, and the subject 203 is assumed to be the focus-tracking subject selected by the user. By selecting a focus-tracking subject, the captured and displayed image can be made sharper at the subject than at other positions.
With other factors unchanged (such as the position and orientation of the shooting module), if the user shoots with a field angle of 1×FOV, the image captured by the shooting module may be the image indicated by reference numeral 204; with a field angle of 4×FOV, it may be the image indicated by reference numeral 205.
If the required image display effect includes the focus-tracking subject occupying a relatively large proportion of the displayed image, the user shooting the subject 203 in the scene of fig. 2 can manually zoom to 4×FOV, and the captured and displayed image may be the image indicated by reference numeral 205.
For another example, when the subject moves in one direction, the user can manually adjust the position and/or orientation of the shooting module to shoot in that direction; when the subject moves in another direction, the user must adjust the shooting module again toward the new direction. When shooting with a larger field-angle magnification, however, the image capture range of the shooting module is sensitive to positional movement, so the subject easily goes out of frame, being partially or even completely lost from the displayed image. In addition, hand shake while holding the shooting module also readily causes the subject to go out of frame.
For example, a displayed image with the subject in frame may be as shown in fig. 3, and one with the subject out of frame as shown in fig. 4; the portion drawn in broken lines in fig. 4 falls outside the displayed image, i.e., that portion is neither captured nor displayed.
To solve the problems that the user needs to zoom manually when shooting a focus-tracking subject and that the subject easily goes out of frame during manual zooming, resulting in a poor focus-tracking effect, an embodiment of the present application provides an image processing method, shown in fig. 10, which may include the following steps 1001 to 1005. The method is applicable to both handheld shooting scenarios and tripod shooting scenarios.
In a handheld shooting scenario, the user holds the shooting module to take photos or record video. As the user's hands move, the position and/or orientation of the shooting module changes accordingly, and the user may also zoom manually as needed, all of which affect the image capture range of the shooting module.
In a tripod shooting scenario, the user mounts the shooting module on a tripod to take photos or record video; the position and orientation of the shooting module can then remain unchanged, eliminating the adverse effect of hand shake on image capture. The user can still zoom manually as needed to adjust the image capture range of the shooting module.
In step 1001, a first image captured by the shooting module is acquired.
In one embodiment, the shooting module may be a camera in a mobile phone.
The shooting module can capture images in real time. Each image captured in real time can serve as a first image on which the image processing method shown in fig. 10 is performed.
The first image may be the initial image captured by the shooting module; the initial image is not sent for display directly, i.e., it is not displayed directly as the shot image.
Referring to fig. 2, when the shooting module is in the basketball court scene shown in fig. 2, the captured first image may be the image indicated by reference numeral 204 when the field angle used is 1×FOV, and the image indicated by reference numeral 205 when it is 4×FOV.
In one embodiment, the field-angle magnification used by the shooting module may be capped at a set value (such as 1× or 2×), so that the shooting module captures the first image with a smaller field angle (such as 1×FOV or 2×FOV). This helps ensure that the focus-tracking subject generally appears in the first images captured in real time, so that the portion of the image containing the subject can subsequently be cropped from the first image for display.
Referring to fig. 5, the image capture scene may also be the basketball court scene shown in fig. 5, where the boundary line of the court is indicated by reference numeral 501, the boundary line of the buffer area outside the court by reference numeral 502, and the subject 503 is assumed to be the focus-tracking subject selected by the user. When the shooting module captures images at 1×FOV in this scene, the captured first image may be the image indicated by reference numeral 504. As shown in fig. 5, the first image includes an image of the subject 503, and detection of the focus-tracking subject can be performed on this first image.
Referring to fig. 6, the image capture scene may likewise be the basketball court scene shown in fig. 6, where the boundary line of the court is indicated by reference numeral 601, the boundary line of the buffer area by reference numeral 602, and the subject 603 is assumed to be the focus-tracking subject selected by the user. When the shooting module captures images at 1×FOV in this scene, the captured first image may be the image indicated by reference numeral 604. As shown in fig. 6, the first image includes an image of the subject 603, and detection of the focus-tracking subject can be performed on this first image.
In step 1002, subject detection data of a first subject is obtained from the first image.
By way of example, the first subject may be a human, an animal, a plant, or any other type of photographable object.
The first subject can be any subject the shooting module can photograph. Taking shooting players on a basketball court as an example, the first subject may be any player on the court.
When the position and orientation of the shooting module have not moved much, the first image generally includes part or all of the first subject, and subject detection on the first image yields the subject detection data of the first subject. If the position and/or orientation of the shooting module has moved substantially, the subject detection data of the first subject may not be obtainable from the first image (or may be considered empty); in one possible implementation of this case, the first image itself may be sent for display.
In one embodiment, the subject detection data may include a subject position and a subject size. The subject position may be, for example, the position of the subject's center point. The subject size may be, for example, the size of the smallest rectangle that contains the subject, and may be represented by the coordinates of two diagonal corners of that rectangle.
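For instance, detection data in the two-diagonal-corner form can be converted to a center point and an extent as follows (a trivial sketch; the coordinate convention is an assumption):

```python
def rect_to_position_and_size(x1, y1, x2, y2):
    """(x1, y1), (x2, y2): diagonal corners of the smallest rectangle
    containing the subject. Returns the subject position (center) and size."""
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # subject position: center point
    w, h = abs(x2 - x1), abs(y2 - y1)      # subject size: rectangle extent
    return (cx, cy), (w, h)
```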
Because the image display requirement constrains the subject's position in the image (e.g., the subject may be required to be as centered as possible in the displayed image), the subject position in the detection data can be used to determine the position within the first image of the image to be displayed (i.e., the second image described below).
Because the image display requirement constrains the subject's proportion of the image (e.g., the subject's proportion of the displayed image may be required to be no lower than a preset value), the subject size in the detection data can be used to determine the size of the image to be displayed (i.e., the second image), that is, to determine the value of the magnification of the first field angle described below.
In one embodiment, the first subject may be a focus-tracking subject pre-selected by the user.
In one embodiment, the subject identification data of the first subject may be acquired in advance, and subject detection may be performed on the first image according to that identification data to obtain the subject detection data. For example, whether the first subject is present in the first image may be detected according to the subject identification data; if so, the position and size of the first subject in the first image are acquired as its subject detection data.
In an embodiment of the present application, before the first image captured by the shooting module is acquired, the image processing method may further include: in response to a subject selection operation on an image displayed by the display module, taking the selected subject as the first subject and acquiring subject identification data of the first subject. Based on this, step 1002 may include: performing subject detection according to the subject identification data of the first subject and the first image to obtain the subject detection data of the first subject.
In one possible implementation, while images are being displayed, a user who wants to track focus on a subject of interest may tap that subject on the display module to perform the subject selection operation.
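A hit-test over the candidate detections is one plausible way to resolve such a tap into a first subject; the tuple layout below is a hypothetical convention, not the application's actual data format.

```python
def select_subject(tap_x, tap_y, detections):
    """Return the id of the candidate whose rectangle contains the tap point.
    `detections` is a list of (x1, y1, x2, y2, subject_id) tuples."""
    for x1, y1, x2, y2, subject_id in detections:
        if x1 <= tap_x <= x2 and y1 <= tap_y <= y2:
            return subject_id   # this becomes the focus-tracking first subject
    return None                 # the tap did not land on any candidate
```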
In one embodiment, the image display period may be an image preview period; in another embodiment, it may be an image capture period.
While images are displayed, the user can select a focus-tracking subject as needed, after which images dominated by the selected subject can be displayed for the user to view.
In one possible implementation, the subject identification data may include facial features and body features of the subject. For example, when the focus-tracking subject is a person, the subject identification data may include the person's facial features and body features.
Taking the subject 503 of fig. 5 as the first subject, whether the subject 503 is present in the first image (the image indicated by reference numeral 504) can be detected based on its subject identification data; once presence is detected, the subject position and subject size of the subject 503 in the first image can be obtained as the subject detection data.
In one embodiment of the present application, step 1002 may include: scaling the first image down proportionally to obtain a third image; and performing subject detection on the third image according to the pre-acquired subject identification data of the first subject to obtain the subject detection data of the first subject.
The initial image captured by the shooting module is usually large, for example 3000×2000 pixels, i.e., 6 megapixels. Performing subject detection directly on the original image costs power and time, so the original image can be scaled down proportionally and subject detection performed on the reduced image.
In one possible implementation, blocks of several pixels (e.g., 4 or 16) may each be combined into 1 pixel. The image resulting from this pixel-synthesis process is smaller, for example 600×400 pixels, but its content is unchanged, supporting accurate subject detection.
Performing subject detection on the proportionally reduced initial image preserves detection accuracy while improving detection efficiency, which helps keep the image display real-time.
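One way to realize the pixel-synthesis reduction is block averaging; the 5×5 block size below is an assumption chosen so that a 3000×2000 frame reduces exactly to 600×400.

```python
import numpy as np

def downscale(img, factor=5):
    """Combine factor x factor pixel blocks into one pixel by averaging,
    shrinking the image proportionally while keeping its content geometry."""
    h, w = img.shape[:2]
    h2, w2 = h // factor * factor, w // factor * factor  # trim to block multiple
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)    # one pixel per block

frame = np.zeros((2000, 3000, 3), dtype=np.uint8)  # a 6-megapixel dummy frame
print(downscale(frame).shape)                      # (400, 600, 3)
```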
In step 1003, the magnification of the first field angle is obtained according to the magnification of the field angle the shooting module used to capture the first image, the subject detection data of the first subject, and the set image display requirement.
The magnification of a field angle may also be referred to as the optical magnification or the zoom magnification (zoom ratio) of the field angle.
In one embodiment, the set image display requirement may include that the first subject's proportion of the displayed image is not below a set proportion threshold, which may be set as needed to values such as 30%, 40%, or 60%. Comparing the image at reference numeral 204 (1×FOV) with the image at reference numeral 205 (4×FOV) in fig. 2: the larger the field-angle magnification, the larger the subject's proportion of the image; conversely, the smaller the magnification, the smaller the proportion.
In one embodiment, the set image display requirement may further include that the first subject's portion in the displayed image corresponds to its portion in the first image, to avoid the problem of part of the subject missing.
Accordingly, given the constraint of the set proportion threshold and the need to avoid subject loss, the magnification of the first field angle corresponding to the image to be displayed (the shooting module could capture the displayed content at this first field angle) can be determined from the magnification of the field angle corresponding to the first image.
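A worked example under a simple square-law assumption (the subject's area fraction grows with the square of the magnification ratio, which holds for an ideal centered crop; the numbers are illustrative):

```python
capture_mag = 1.0        # field-angle magnification of the first image
subject_fraction = 0.04  # subject occupies 4% of the first image's area
duty_threshold = 0.40    # required proportion in the displayed image

first_mag = capture_mag * (duty_threshold / subject_fraction) ** 0.5
print(first_mag)  # ~3.16 -> a first field angle of roughly 3.2x would suffice
```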
Different field angles correspond to different image capture ranges. For example, in fig. 5, the image capture range corresponding to the field angle of the first image (the range of the image at reference numeral 504) is larger than that of the first field angle (the range of the image at reference numeral 505).
Referring to fig. 5, in that scene the first image may be the image at reference numeral 504 corresponding to 1×FOV, the first field angle may be 4×FOV, and the displayed image (i.e., the second image) may be the image at reference numeral 505 corresponding to 4×FOV. As shown in fig. 5, the second image is part of the first image, the subject 503 occupies a relatively large proportion of it, and no part of the subject 503 present in the first image is missing from the second image.
Referring to fig. 6, in that scene the first image may be the image at reference numeral 604 corresponding to 1×FOV, the first field angle may be 2.5×FOV, and the displayed image (i.e., the second image) may be the image at reference numeral 605 corresponding to 2.5×FOV. As shown in fig. 6, the second image is part of the first image, the subject 603 occupies a relatively large proportion of it, and no part of the subject 603 present in the first image is missing from the second image.
From the magnification of the field angle corresponding to the first image, the subject detection data of the first subject, and the set image display requirement, the magnification of the first field angle can be determined such that an image captured at the first field angle would meet the image display requirement.
In one embodiment, the set image display requirement may include that the first subject is centered in the displayed image when a subject-centered display condition holds. The first subject may be considered centered if the distance between its center and the center of the displayed image is not greater than a preset spacing threshold.
In one possible implementation, the boundary of the second image can be derived by centering on the first subject's position and applying the magnification of the first field angle. If that boundary lies within the first image, the subject-centered display condition holds; otherwise it does not, and the first subject may sit off-center in the displayed image.
For example, if the subject is partially out of frame in the first image, the subject-centered display condition does not hold; if the subject is close to the boundary of the first image, the condition is likely not to hold.
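The condition can be tested directly on the candidate crop, as in this sketch (parameter names are illustrative):

```python
def centered_crop_fits(cx, cy, crop_w, crop_h, img_w, img_h):
    """Subject-centered display condition: the second-image boundary obtained
    by centering the crop on the subject must lie within the first image."""
    return (cx - crop_w / 2 >= 0 and cx + crop_w / 2 <= img_w and
            cy - crop_h / 2 >= 0 and cy + crop_h / 2 <= img_h)
```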
In step 1004, a second image is obtained according to the magnification of the first field angle, where the second image is the portion of the first image corresponding to the subject detection data of the first subject and the first field angle, and the second image meets the image display requirement.
The second image can be obtained directly from the first image; there is no need to capture it optically at the first field angle, i.e., no need to adjust the field-angle magnification. A second image meeting the image display requirement can thus be obtained quickly and accurately for display.
The magnification of the first field angle is not smaller than the magnification of the field angle corresponding to the first image. If the two magnifications are equal, the second image is the first image; if they differ, the second image is part of the first image. For example, in fig. 5 the magnification of the first field angle is 4×, while that of the field angle corresponding to the first image is 1×.
Once the magnification of the first field angle is determined, the second image can be obtained from the first image, the subject detection data of the first subject, and the image size corresponding to the first field angle. By cropping out the portion of the first image corresponding to the subject for display, a displayed image meeting the image display requirement is obtained without any manual zooming by the user, and the subject does not go out of frame in the displayed image.
In one embodiment, the first image may be captured with a smaller field-angle magnification (e.g., 1×FOV or 2×FOV) to increase the probability that the focus-tracking subject appears, without loss, in the first image.
In one embodiment of the present application, the first image may be cropped according to the image size corresponding to the first field angle, combined with the position of the first subject in the first image and the image display requirement, to obtain the second image. The first field angle determines the second image's proportion of the first image, while the subject position of the first subject determines the second image's position within the first image.
In one embodiment, referring to fig. 5, the first image is the image at reference numeral 504 corresponding to 1×FOV, and the second image is the image at reference numeral 505 corresponding to 4×FOV; the subject 503 is centered in the second image, occupies a relatively large proportion of it, and is not missing any part.
In another embodiment, referring to fig. 6, the first image is the image at reference numeral 604 corresponding to 1×FOV, and the second image is the image at reference numeral 605 corresponding to 2.5×FOV; the subject 603 is centered in the second image, occupies a relatively large proportion of it, and is not missing any part.
In step 1005, the second image is output to the display module.
Once a second image meeting the image display requirement is obtained, it can be sent for display as the shot image, so that the user views image content that meets the requirement. By running the processing flow of fig. 10 on each image the shooting module captures in real time, the on-screen size of the focus-tracking subject can be kept essentially unchanged.
In one embodiment, referring to fig. 5, after the image at reference numeral 505 is sent for display, the display module may display the image at reference numeral 506, whose size matches the display screen of the display module.
In another embodiment, referring to fig. 6, after the image at reference numeral 605 is sent for display, the display module may display the image at reference numeral 606, whose size matches the display screen of the display module.
In one embodiment, when the display module displays the sent image, it may be in a preview state and the displayed image may be a preview image. From the preview state the user can subsequently take photos or record video as needed, and every frame in the resulting photo or video stream can be an image that meets the image display requirement.
In another embodiment, when the display module displays the sent image, it may be in a shooting state and the displayed image may be a shot image.
Because the focus-tracking subject moves, its position and size in the images captured by the shooting module in real time are not fixed. In the embodiments of the present application, subject detection is performed on the initial image captured by the shooting module, and the portion of the initial image corresponding to the focus-tracking subject is cropped out, based on the detection result, and sent for display. The desired image display effect can thus be achieved and loss of the subject from the displayed image avoided; the image display effect is good, the user does not need to zoom manually, and the shooting experience is good.
In one embodiment of the present application, the image processing method may further include: acquiring first information describing the positional relationship between the first subject and the boundary of the first image; and controlling the deflection angle of a prism to change when the first information does not meet a set positional-relationship requirement, where a change in the deflection angle of the prism produces a corresponding change in the image capture range of the shooting module.
In one embodiment, the set positional-relationship requirement may include, for example, that the first subject is kept away from the boundary of the first image. Constraining the first subject to stay away from the boundary keeps it near the middle of the first image, which avoids the subject being out of frame or about to go out of frame and supports cropping from the first image a second image that meets the image display requirement.
After subject detection is completed, the positional relationship between the focus-tracking subject and the boundary of the initial image can be obtained. This relationship reflects whether the subject is about to go out of frame (i.e., about to be partially lost from the initial image); if so, the prism can be controlled to deflect accordingly as compensation, keeping the subject in frame in the initial images captured in real time and supporting a displayed image that meets expectations.
When the subject is about to go out of frame, the prism is deflected under control as compensation; performing such compensation continuously as needed increases the likelihood that the subject stays in frame and provides support for cropping from the first image a displayed image that meets the image display requirement.
In an embodiment of the present application, referring to fig. 7, an optical lens 702 may be disposed on the optical path of an optical sensor 701 of the shooting module (as shown in fig. 7, the optical lens 702 may be located in front of the optical sensor 701). A prism 703 is built into the optical lens 702 and can be deflected in a given direction under the control of a corresponding motor (not shown in fig. 7) to change the deflection angle of the prism 703.
In one embodiment, the prism 703 may deflect in the longitudinal-axis direction (up and down in the schematic of fig. 7), for example to the position shown in fig. 8 through the angle α marked in fig. 8. Deflecting the prism 703 in the longitudinal-axis direction achieves an effect similar to tilting the optical sensor 701 up or down, changing the image capture range accordingly.
Referring to fig. 7, with the prism 703 not deflected, if the optical sensor 701 shoots the subject 704, it can capture an image of the middle portion of the subject 704 (the solid-line portion of the image at reference numeral 705) but not of the upper and lower portions (the broken-line portion of the image at reference numeral 705).
Referring to fig. 8, with the prism 703 deflected through the angle α shown in fig. 8, if the optical sensor 701 shoots the subject 704, it can capture an image of the upper portion of the subject 704 (the solid-line portion of the image at reference numeral 801) but not of the lower portion (the broken-line portion of the image at reference numeral 801).
It can be seen that changing the deflection angle of the prism 703 changes the image capture range of the optical sensor 701 accordingly, so that the image content in the required range can still be captured while the position of the optical sensor 701 stays unchanged.
In this way, when the position of the optical sensor 701 is fixed, the image acquisition range of the optical sensor 701 in the longitudinal direction can be enlarged by the deflection of the prism 703 in the longitudinal direction, and thus the problem of the frame out in the longitudinal direction of the main body can be solved. For example, when a certain boundary distance between the main body and the longitudinal axis direction is relatively short, the prism can be controlled to deflect at an angle to a corresponding degree in the longitudinal axis direction, so that the image acquisition range of the shooting module is changed towards the boundary direction, and therefore an initial image more centered in the main body can be acquired, and the main body is prevented from being out of the frame.
Whether the main body is about to go out of frame can be determined according to the distance between the main body and the boundary of the initial image. If so, the prism can be controlled to deflect in the corresponding direction and to the corresponding degree for compensation, so that in the initial image acquired after compensation, the main body is located as close to the center of the image as possible rather than at the boundary.
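As a non-limiting illustration of this decision flow, the sketch below checks each boundary gap against a threshold and requests compensation toward any boundary that is too close; the names, the threshold value, and the deflect() callback are assumptions made for exposition, not part of the disclosure:

    # Sketch of the frame-out check and compensation decision described above.
    # subject_box and the deflect() callback are illustrative assumptions.
    def check_and_compensate(subject_box, image_size, threshold_px, deflect):
        # subject_box = (left, top, right, bottom) in pixels;
        # image_size = (width, height); deflect(axis, direction) stands in
        # for the prism control described in the text.
        width, height = image_size
        left, top, right, bottom = subject_box
        # Gap between the main body and each boundary of the initial image.
        gaps = {
            ("x", -1): left,             # near the left boundary
            ("x", +1): width - right,    # near the right boundary
            ("y", -1): top,              # near the top boundary
            ("y", +1): height - bottom,  # near the bottom boundary
        }
        for (axis, direction), gap in gaps.items():
            if gap <= threshold_px:
                # Deflect the prism toward that boundary so the acquisition
                # range shifts and the main body moves back toward the center.
                deflect(axis, direction)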
In another embodiment, the prism 703 may deflect in the transverse axis direction (the direction perpendicular to the page in the schematic diagram of fig. 7). Deflection of the prism 703 in the transverse axis direction achieves an effect similar to panning the optical sensor 701 left or right, changing the image acquisition range accordingly. In this way, when the position of the optical sensor 701 is fixed, deflecting the prism 703 in the transverse axis direction widens the image acquisition range of the optical sensor 701 in that direction, alleviating the problem of the main body going out of frame horizontally.
In yet another embodiment, the prism 703 may deflect in both the transverse and longitudinal axis directions, so that the image acquisition range of the optical sensor 701 can be enlarged in both directions and both the horizontal and vertical frame-out problems of the main body can be alleviated.
By introducing an added compensation device comprising the prism and its related control components, compensation can be performed when the main body is about to go, or has gone, out of frame, realizing adaptive on-demand adjustment of the image acquisition range. The user does not need to manually change the position and orientation of the shooting module to adjust the image acquisition range, which facilitates adaptive, smooth zooming of the display image.
In a handheld scene, when the user uses the shooting module handheld, the added compensation device can also compensate for hand shake, weakening its adverse effect on image display. In addition, since the compensation device operates on demand, it can to a corresponding extent replace manual adjustment of the image acquisition range by the user's hand, with more accurate adjustment.
In a tripod scene, when the user uses the shooting module on a tripod, the added compensation device can expand the image acquisition range in all directions, which is equivalent to giving the shooting module a larger image acquisition range that fully covers the expected shooting range. The occurrence of the main body going out of frame can thus be effectively avoided, and the stability of the image display can be effectively ensured.
Referring to fig. 5, with the position of the shooting module fixed, if no prism compensation is performed, the image acquisition range of the shooting module may be the range covered by the image indicated by reference numeral 504; if prism compensation is performed in the horizontal and vertical directions, the image acquisition range of the shooting module may be the range indicated by reference numeral 507.
In one possible implementation, deflection of the prism in the transverse axis direction moves the image acquisition range left and right in the schematic diagram of fig. 5, while deflection of the prism in the longitudinal axis direction moves it up and down.
Referring to fig. 6, likewise, with the position of the shooting module fixed, if no prism compensation is performed, the image acquisition range of the shooting module may be the range covered by the image indicated by reference numeral 604; if prism compensation is performed in the horizontal and vertical directions, the image acquisition range of the shooting module may be the range indicated by reference numeral 607.
In one embodiment of the present application, the main body detection data of the first main body includes a main body detection size of the first main body, and the image processing method further includes: acquiring the minimum distance, in a first direction, between the boundary corresponding to the first main body and the boundary of the first image, and acquiring the minimum distance, in a second direction, between the boundary corresponding to the first main body and the boundary of the first image. The boundary corresponding to the first main body is the boundary of the second image or the boundary of the main body detection size of the first main body, and the first direction is perpendicular to the second direction. The first information does not meet the positional relationship requirement when either of the following holds: the minimum distance in the first direction is not greater than (or is smaller than) a first distance threshold, or the minimum distance in the second direction is not greater than (or is smaller than) a second distance threshold.
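As a non-limiting sketch of the minimum-distance computation above, assuming the boundary corresponding to the first main body is given as a pixel rectangle in first-image coordinates:

    def min_boundary_distances(box, image_w, image_h):
        # box = (left, top, right, bottom) of the boundary corresponding to
        # the first main body (the second-image boundary or the main body
        # detection size), expressed in first-image pixel coordinates.
        left, top, right, bottom = box
        d_first = min(left, image_w - right)   # first (horizontal) direction
        d_second = min(top, image_h - bottom)  # second (vertical) direction
        return d_first, d_second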
The minimum distances between the boundary corresponding to the focus-tracking main body and the boundary of the initial image reflect, in each of the two directions, whether the main body is about to go out of frame, so that compensation can subsequently be performed on demand in the corresponding direction.
In one embodiment, the first direction and the second direction may be the above-mentioned transverse and longitudinal directions, respectively, the first direction being parallel to part of the boundary of the first image.
In one embodiment, the first distance threshold and the second distance threshold may have equal values.
In one embodiment, the first distance threshold and the second distance threshold may have different values. For example, taking the boundary corresponding to the first main body as the boundary of the second image, the first distance threshold may be the product of the boundary length of the first image in the first direction and a preset percentage, and the second distance threshold may be the product of the boundary length of the first image in the second direction and the preset percentage. The preset percentage may be, for example, any value from 5% to 10%.
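A minimal sketch of these per-direction thresholds follows; the 5% value is merely one point in the 5%-10% range mentioned above:

    PRESET_PERCENTAGE = 0.05  # any value from 5% to 10% per the text

    def distance_thresholds(image_w, image_h, pct=PRESET_PERCENTAGE):
        # Threshold = boundary length of the first image in each direction,
        # multiplied by the preset percentage.
        return image_w * pct, image_h * pct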
Considering that the larger the field angle magnification, the smaller the image acquisition range, in one embodiment, when the field angle magnification used for acquiring the first image is not larger (or is smaller) than a set magnification, the boundary of the second image is taken as the boundary corresponding to the first main body; otherwise, the boundary of the main body detection size of the first main body is taken as the boundary corresponding to the first main body. In this way, an accurate judgment of the main body going out of frame can be made.
The set magnification may be obtained from empirical values or through debugging, as applicable.
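As one non-limiting way this selection rule could be expressed (the set magnification value below is an invented placeholder, since the text leaves it to empirical calibration or debugging):

    SET_MAGNIFICATION = 2.0  # illustrative; obtained empirically in practice

    def boundary_for_first_body(view_magnification, second_image_box, body_box):
        # At low magnification the acquisition range is large, so the wider
        # second-image boundary is used; at high magnification the tighter
        # main-body detection boundary gives a more accurate frame-out judgment.
        if view_magnification <= SET_MAGNIFICATION:
            return second_image_box
        return body_box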
In one embodiment of the present application, when the first information does not meet the set positional relationship requirement, controlling the deflection angle of the prism to change includes: if the minimum distance in the first direction is not greater than the first distance threshold, acquiring a first deflection angle adjustment amount corresponding to that minimum distance according to a first preset correspondence between boundary distance and deflection angle adjustment amount, and adjusting the deflection angle of the prism in the first direction according to the first deflection angle adjustment amount; and if the minimum distance in the second direction is not greater than the second distance threshold, acquiring a second deflection angle adjustment amount corresponding to that minimum distance according to a second preset correspondence between boundary distance and deflection angle adjustment amount, and adjusting the deflection angle of the prism in the second direction according to the second deflection angle adjustment amount.
By adjusting the deflection angles of the prism in the two directions separately through the preset correspondences, accurate adjustment of the prism's deflection angle can be achieved.
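As a non-limiting illustration, a preset correspondence can be encoded as a piecewise lookup table mapping boundary-distance bands to deflection angle adjustment amounts; the breakpoints and angles below are invented for exposition:

    import bisect

    # (upper distance bound in px, deflection adjustment in degrees),
    # sorted by distance: closer to the boundary -> larger adjustment.
    FIRST_DIRECTION_TABLE = [(20, 1.5), (50, 1.0), (80, 0.5)]

    def deflection_adjustment(distance_px, table=FIRST_DIRECTION_TABLE):
        bounds = [d for d, _ in table]
        i = bisect.bisect_left(bounds, distance_px)
        if i >= len(table):
            return 0.0  # far enough from the boundary: no compensation
        return table[i][1]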
In one embodiment, the first preset correspondence and the second preset correspondence may be calibrated based on a conversion relationship between distance and deflection angle adjustment amount, combined with an adjustment coefficient corresponding to the distance. The adjustment coefficient is used to scale the deflection angle adjustment amount converted from the distance, that is, to increase, decrease, or maintain it.
In one possible implementation, the distance is inversely related to the corresponding adjustment coefficient; that is, the larger the distance, the smaller the adjustment coefficient, so that light compensation is applied when the main body's frame-out risk is mild and heavy compensation is applied when it is severe. In this way, compensation is applied in proportion to severity, yielding a better compensation effect.
Taking a preset percentage of 10% as an example, if the minimum distance in the first direction is between 5% and 10% of the boundary length of the first image in the first direction, the main body's frame-out risk is mild; if it is between 0% and 5%, the risk is severe.
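A minimal sketch of this severity weighting under the 10% preset percentage follows; the coefficient values are assumptions:

    def adjustment_coefficient(min_distance_px, boundary_len_px):
        ratio = min_distance_px / boundary_len_px
        if ratio < 0.05:    # severe: main body very close to the boundary
            return 1.2      # heavier compensation
        if ratio < 0.10:    # mild: inside the warning band
            return 0.8      # lighter compensation
        return 0.0          # no compensation needed

    # Final adjustment = the angle converted from the distance, scaled by the
    # coefficient, matching the calibration described above.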
In one possible implementation, the image processing module may send a deflection instruction carrying the deflection direction and the deflection angle adjustment amount to the motor controlling the deflection of the prism, and the motor executes the received instruction to control the angular deflection of the prism in the corresponding direction.
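As a non-limiting sketch of such a deflection instruction (the command structure and the motor interface are assumptions, not an actual driver API):

    from dataclasses import dataclass

    @dataclass
    class DeflectionCommand:
        axis: str         # "first" (horizontal) or "second" (vertical)
        direction: int    # +1 or -1, toward the near boundary
        angle_deg: float  # deflection angle adjustment amount

    def send_deflection(motor, cmd: DeflectionCommand):
        # motor.execute() stands in for the real motor control entry point.
        motor.execute(cmd.axis, cmd.direction, cmd.angle_deg)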
Referring to fig. 9, an embodiment of the present application further provides an image processing system, which may include a data input module 91, a perception engine 92, a decision engine 93, an image processing module 94, and a display module 95.
The data input module 91 may include a sensor map acquisition module 911 and a body detection module 912.
The sensor map acquisition module 911 may be configured to acquire an initial image acquired by an optical sensor of the photographing module.
The main body detection module 912 may be configured to perform an equal-scale reduction process on the initial image acquired by the sensor map acquisition module 911, and perform main body detection on the processed image according to main body identification data of the focus tracking main body selected by the user, so as to obtain main body detection data of the focus tracking main body.
The perception engine 92 may include a subject detection data acquisition module 921, an image data acquisition module 922, and a camera configuration data acquisition module 923.
Wherein the subject detection data acquisition module 921 may be used to acquire subject detection data obtained by the subject detection module 912 via subject detection.
The image data acquisition module 922 may be used to acquire image base information (e.g., size, format, etc.) of the initial image acquired by the sensor map acquisition module 911.
The camera configuration data acquisition module 923 may be configured to acquire camera configuration data, which may include, for example, a field angle magnification used to acquire the initial image.
In one embodiment, the perception engine 92 may further include a compensation module. The compensation module may be configured to obtain, according to the main body detection data, the minimum distances between the boundary corresponding to the focus-tracking main body and the boundary of the initial image in the transverse and longitudinal axis directions, and to adjust the deflection angle of the prism as needed according to a comparison between each minimum distance and the corresponding distance threshold, thereby compensating on demand for the main body going out of frame.
In one possible implementation, when the deflection angle of the prism needs to be adjusted, the compensation module may issue a deflection angle adjustment instruction through the hardware abstraction layer (HAL) to the motor driver that controls the deflection of the prism; the motor driver converts the instruction into a current value and issues it to the motor through an I2C interface, so that the motor deflects the prism by the corresponding angle.
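A hedged sketch of this path follows, assuming an smbus2-style I2C bus object; the angle-to-current calibration, device address, and register are invented placeholders, since a real driver would follow the motor vendor's protocol:

    CURRENT_PER_DEGREE_MA = 12.0  # assumed linear angle-to-current calibration
    MOTOR_I2C_ADDR = 0x3C         # placeholder motor driver device address
    CONTROL_REGISTER = 0x01       # placeholder drive-current register

    def issue_deflection(i2c_bus, angle_deg):
        # Convert the deflection angle adjustment into a drive-current value
        # and write it to the motor driver over I2C.
        current_ma = int(angle_deg * CURRENT_PER_DEGREE_MA)
        i2c_bus.write_byte_data(MOTOR_I2C_ADDR, CONTROL_REGISTER,
                                current_ma & 0xFF)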
The decision engine 93 may include a scale output module 931.
The zoom ratio output module 931 may be configured to obtain the field angle magnification corresponding to the display image according to the main body detection data obtained by the main body detection data acquisition module 921, the image basic information obtained by the image data acquisition module 922, and the camera configuration data obtained by the camera configuration data acquisition module 923, in combination with the set image display requirement.
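As one non-limiting illustration of such a decision, the sketch below chooses the magnification that gives the main body a target share of the display image; the target ratio and clamping bounds are assumptions, not values from the disclosure:

    def target_magnification(body_h, image_h, current_mag,
                             target_ratio=0.5, min_mag=1.0, max_mag=10.0):
        # body_h: detected main-body height in pixels at current_mag;
        # image_h: display image height. For a crop-based zoom, the main
        # body's pixel size scales linearly with the magnification.
        if body_h <= 0:
            return current_mag  # no main body detected: keep the current view
        desired = current_mag * (target_ratio * image_h) / body_h
        return max(min_mag, min(max_mag, desired))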
The image processing module 94 may include an image cutting module 941.
The image cutting module 941 may be configured to, according to the image size corresponding to the field angle magnification of the display image, and in combination with the position of the focus-tracking main body in the initial image and the set image display requirement, invoke hardware to crop the initial image to obtain the display image, and output the display image to the display module 95 for image display.
By cropping the initial image acquired in real time to obtain a display image that meets the image display requirement, and displaying that image in real time, a smooth image effect can be achieved.
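A minimal sketch of the centered crop implied here follows, assuming a crop-based zoom with magnification not less than 1; the names and the clamping behavior are illustrative assumptions:

    def crop_for_display(image_w, image_h, magnification, body_center):
        # Crop size shrinks as the field angle magnification grows.
        crop_w = int(image_w / magnification)
        crop_h = int(image_h / magnification)
        cx, cy = body_center
        # Center the window on the main body, clamped to the initial image.
        left = min(max(cx - crop_w // 2, 0), image_w - crop_w)
        top = min(max(cy - crop_h // 2, 0), image_h - crop_h)
        return left, top, crop_w, crop_h  # hardware crop window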
An embodiment of the present application provides an image processing apparatus, including: the acquisition module is used for acquiring the first image acquired by the shooting module; the sensing module is used for obtaining main body detection data of the first main body according to the first image; the decision module is used for acquiring the multiplying power of the first view angle according to the multiplying power of the view angle used by the shooting module, the main body detection data of the first main body and the set image display requirement; the processing module is used for acquiring a second image according to the multiplying power of the first view angle, wherein the second image is a part of the first image corresponding to the main body detection data of the first main body and the first view angle, and the second image meets the image display requirement; and the sending and displaying module is used for outputting the second image to the display module.
The embodiment of the application also provides an electronic chip, which is installed in an electronic device (UE), the electronic chip comprising: a processor for executing computer program instructions stored on a memory, wherein the computer program instructions, when executed by the processor, trigger the electronic chip to perform the method steps provided by any of the method embodiments of the present application.
The embodiment of the application also provides a terminal device, which comprises a communication module, a memory for storing computer program instructions and a processor for executing the program instructions, wherein when the computer program instructions are executed by the processor, the terminal device is triggered to execute the method steps provided by any method embodiment of the application.
The embodiment of the application also provides a server device, which comprises a communication module, a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the server device to execute the method steps provided by any of the method embodiments of the application.
The embodiment of the application further provides an electronic device, where the electronic device includes a plurality of antennas, a memory for storing computer program instructions, a processor for executing the computer program instructions, and a communication apparatus (such as a communication module capable of implementing 5G communication based on NR protocol), where the computer program instructions, when executed by the processor, trigger the electronic device to execute the method steps provided by any of the method embodiments of the application.
In particular, in an embodiment of the present application, one or more computer programs are stored in the memory, which include instructions that, when executed by the apparatus, cause the apparatus to perform the method steps described in the embodiments of the present application.
Further, the devices, apparatuses, modules illustrated in the embodiments of the present application may be implemented by a computer chip or entity, or by a product having a certain function.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
In the several embodiments provided herein, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In particular, a computer readable storage medium is provided in an embodiment of the present application, where a computer program is stored, when the computer program is run on a computer, to make the computer execute the method steps provided in the embodiment of the present application.
The present embodiments also provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method steps provided by the embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the elements is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units, if implemented in the form of software functional units, may be stored in a computer-readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
In the embodiments of the present application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as a combination of electronic hardware, computer software, and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
For convenience and brevity of description, the same and similar parts of the various embodiments of the present application may be referred to one another. For example, for the specific working processes of the systems, devices, and units described in the embodiments of the present application, reference may be made to the corresponding processes in the method embodiments of the present application, which are not repeated here.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the present application, which is defined by the claims.

Claims (10)

1. An image processing method, comprising:
acquiring a first image acquired by a shooting module;
obtaining main body detection data of a first main body according to the first image;
acquiring the multiplying power of the first view angle according to the multiplying power of the view angle used by the shooting module, the main body detection data of the first main body and the set image display requirement;
acquiring a second image according to the magnification of the first view angle, wherein the second image is a part of the first image corresponding to the main body detection data of the first main body and the first view angle, and the second image meets the image display requirement;
and outputting the second image to a display module.
2. The method according to claim 1, wherein the method further comprises:
acquiring first information for describing a positional relationship between the first subject and a boundary of the first image;
under the condition that the first information does not meet the set position relation requirement, controlling the deflection angle of the prism to change;
the change of the image acquisition range of the shooting module is related to the change of the deflection angle of the prism.
3. The method of claim 2, wherein the body inspection data of the first body comprises a body inspection size of the first body;
the method further comprises:
acquiring the minimum distance between the boundary corresponding to the first main body and the boundary of the first image in the first direction, and acquiring the minimum distance between the boundary corresponding to the first main body and the boundary of the first image in the second direction;
the boundary corresponding to the first main body is the boundary of the second image or the boundary of the main body detection size of the first main body, and the first direction and the second direction are perpendicular;
the case that the first information does not meet the positional relationship requirement includes: either the minimum distance in the first direction is not greater than a first distance threshold, or the minimum distance in the second direction is not greater than a second distance threshold.
4. A method according to claim 3, wherein controlling the change in the deflection angle of the prism in the case where the first information does not meet the set positional relationship requirement comprises:
acquiring a first deflection angle adjustment amount corresponding to the minimum distance in the first direction according to a first preset corresponding relation between the boundary distance and the deflection angle adjustment amount under the condition that the minimum distance in the first direction is not greater than the first distance threshold, and adjusting the deflection angle of the prism in the first direction according to the first deflection angle adjustment amount;
and under the condition that the minimum distance in the second direction is not greater than the second distance threshold, acquiring a second deflection angle adjustment amount corresponding to the minimum distance in the second direction according to a second preset corresponding relation between the boundary distance and the deflection angle adjustment amount, and adjusting the deflection angle of the prism in the second direction according to the second deflection angle adjustment amount.
5. The method according to any one of claims 1-4, wherein obtaining subject detection data of a first subject from the first image comprises:
performing equal-proportion reduction processing on the first image to obtain a third image;
and performing main body detection on the third image according to the pre-acquired main body identification data of the first main body to obtain main body detection data of the first main body.
6. The method of any one of claims 1-4, wherein prior to the acquiring the first image acquired by the shooting module, the method further comprises:
responding to the main body selection operation of the image displayed by the display module, taking the selected main body as the first main body, and acquiring main body identification data of the first main body;
the obtaining main body detection data of the first main body according to the first image comprises:
and performing main body detection according to the main body identification data of the first main body and the first image to obtain main body detection data of the first main body.
7. An image processing apparatus, comprising:
the acquisition module is used for acquiring the first image acquired by the shooting module;
the sensing module is used for obtaining main body detection data of the first main body according to the first image;
the decision module is used for acquiring the multiplying power of the first view angle according to the multiplying power of the view angle used by the shooting module, the main body detection data of the first main body and the set image display requirement;
the processing module is used for acquiring a second image according to the multiplying power of the first view angle, wherein the second image is a part of the first image corresponding to the main body detection data of the first main body and the first view angle, and the second image meets the image display requirement;
and the sending and displaying module is used for outputting the second image to the display module.
8. An electronic chip, comprising:
a processor for executing computer program instructions stored on a memory, wherein the computer program instructions, when executed by the processor, trigger the electronic chip to perform the method of any of claims 1-6.
9. An electronic device comprising one or more memories for storing computer program instructions, and one or more processors, wherein the computer program instructions, when executed by the one or more processors, trigger the electronic device to perform the method of any of claims 1-6.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the method according to any of claims 1-6.