CN110913140A - Shooting information prompting method and electronic equipment - Google Patents

Shooting information prompting method and electronic equipment

Info

Publication number
CN110913140A
Authority
CN
China
Prior art keywords
shooting
information
images
score
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911195326.3A
Other languages
Chinese (zh)
Other versions
CN110913140B (en)
Inventor
林义凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911195326.3A priority Critical patent/CN110913140B/en
Publication of CN110913140A publication Critical patent/CN110913140A/en
Application granted granted Critical
Publication of CN110913140B publication Critical patent/CN110913140B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a shooting information prompting method and an electronic device. The method includes: determining preview video information; constructing a 3D model according to the preview video information; determining at least one shooting parameter in the 3D model; shooting according to the at least one shooting parameter to generate N images; determining, among the N images, each target image whose score is greater than a preset score; and outputting, on the shooting interface, prompt information of the target shooting area corresponding to each target image. Because the target shooting areas are generated from the preview video information during the shooting process, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.

Description

Shooting information prompting method and electronic equipment
Technical Field
The present invention relates to the field of electronic devices, and in particular, to a shooting information prompting method and an electronic device.
Background
With the development of science and technology, electronic devices have become indispensable in people's lives. Electronic devices currently on the market focus too much on the processing effects applied to photographs, such as beautifying the captured picture; however, even a beautified picture can be less than satisfactory.
Disclosure of Invention
Embodiments of the invention provide a shooting information prompting method and an electronic device, aiming to solve the prior-art problem that a captured image does not meet the user's requirements.
To solve this technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a method for prompting shooting information, including: determining preview video information; constructing a 3D model according to the preview video information; determining at least one shooting parameter in the 3D model; shooting according to the at least one shooting parameter to generate N images; determining each target image with the score larger than a preset score in the N images; and outputting prompt information of the target shooting area corresponding to each target image on a shooting interface.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes: the first determining module is used for determining preview video information; the construction module is used for constructing a 3D model according to the preview video information; a second determination module for determining at least one shooting parameter in the 3D model; the shooting module is used for shooting according to the at least one shooting parameter to generate N images; the third determining module is used for determining each target image with the score larger than the preset score in the N images; and the output module is used for outputting prompt information of the target shooting area corresponding to each target image on a shooting interface.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting information prompting method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the shooting information prompting method are implemented.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for prompting shooting information according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a shooting information prompting method according to a second embodiment of the present invention;
fig. 3 is a first schematic diagram of a shooting interface according to a second embodiment of the present invention;
fig. 4 is a second schematic diagram of a shooting interface according to the second embodiment of the present invention;
fig. 5 is a block diagram of an electronic device according to a third embodiment of the present invention;
fig. 6 is a block diagram of the structure of an electronic device according to a fourth embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a method for prompting shooting information according to a first embodiment of the present invention is shown.
The shooting information prompting method provided by the embodiment of the invention comprises the following steps:
step 101: preview video information is determined.
Relying on the ultra-high-speed transmission of 5G, preview video information covering the full angular range is captured before the user takes the photograph.
Step 102: and constructing a 3D model according to the preview video information.
The preview video information may be transmitted to a server, which constructs the 3D model according to the preview video information; alternatively, the electronic device itself may construct the 3D model based on the preview video information.
A 3D model is a three-dimensional, stereoscopic model built with three-dimensional modelling software; it may include buildings, people, vegetation, machinery, and the like.
Step 103: in the 3D model, at least one shooting parameter is determined.
The shooting parameters may include the height of the electronic device above the horizontal plane, the angle between the electronic device and the horizontal plane, the position of the electronic device, the distance between the electronic device and the user to be shot, the coordinates of the user to be shot, and the like.
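Purely as an illustration of how these parameters can be grouped (the class and field names below are assumptions, not part of the disclosure), a minimal Python sketch:

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ShootingParameters:
    """Candidate shooting parameters sampled inside the 3D model (illustrative names only)."""
    height_m: Optional[float] = None                         # height of the device above the horizontal plane
    angle_deg: Optional[float] = None                        # angle between the device and the horizontal plane
    device_position: Optional[Tuple[float, float]] = None    # position of the electronic device
    subject_distance_m: Optional[float] = None               # distance between the device and the user to be shot
    subject_coordinates: Optional[Tuple[float, float]] = None  # (x, y) coordinates of the user to be shot

params = ShootingParameters(height_m=1.5, angle_deg=10.0, subject_distance_m=2.0)
print(params)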
Step 104: and shooting according to the at least one shooting parameter to generate N images.
The 3D model is shot based on different shooting parameters to generate a plurality of groups of images, each group of shooting parameters corresponding to N images.
Step 105: and determining each target image with the score larger than the preset score in the N images.
The images generated by each group of shooting parameters are scored, and the target images whose scores are greater than the preset score are obtained.
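A minimal sketch of this selection step, assuming each candidate image already carries a numeric score (the function name, data layout and threshold are hypothetical):

def select_target_images(scored_images, preset_score):
    """Keep only the candidate images whose score exceeds the preset score."""
    # scored_images: list of (image_id, score) pairs produced for one group of shooting parameters
    return [(image_id, score) for image_id, score in scored_images if score > preset_score]

# Example: three candidates, preset score 80 -> only the first two are kept as target images
targets = select_target_images([("img_0", 91.5), ("img_1", 83.0), ("img_2", 62.4)], preset_score=80)
print(targets)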
Step 106: and outputting prompt information of the target shooting area corresponding to each target image on the shooting interface.
The target shooting area is output on the shooting interface of the electronic device. In addition to the target shooting area, prompt information may be displayed, for example telling the user how many metres to move forward or how far the electronic device should be raised, so that the user can follow the prompt information to reach the target shooting area.
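For illustration only, a hedged sketch of how such prompt text might be assembled from the offset between the user's current pose and the target shooting area (all names and thresholds are assumptions):

def build_prompt(current, target):
    """Turn the offset between the current pose and the target shooting area into prompt text."""
    # current / target: dicts with 'forward_m' (distance along the shooting direction) and 'height_m'
    hints = []
    d_forward = target["forward_m"] - current["forward_m"]
    if abs(d_forward) > 0.1:
        hints.append(f"move {'forward' if d_forward > 0 else 'backward'} {abs(d_forward):.1f} m")
    d_height = target["height_m"] - current["height_m"]
    if abs(d_height) > 0.05:
        hints.append(f"{'raise' if d_height > 0 else 'lower'} the device {abs(d_height) * 100:.0f} cm")
    return ", ".join(hints) or "you are in the target shooting area"

print(build_prompt({"forward_m": 0.0, "height_m": 1.2}, {"forward_m": 1.0, "height_m": 1.5}))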
In the target shooting area the user can capture better images, which improves the user experience.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
Example two
Referring to fig. 2, a flowchart illustrating steps of a shooting information prompting method according to a second embodiment of the present invention is shown.
The shooting information prompting method provided by the embodiment of the invention comprises the following steps:
step 201: preview video information is determined.
Relying on the ultra-high-speed transmission of 5G, preview video information covering the full angular range is captured before the user takes the photograph.
Step 202: and constructing a 3D model according to the preview video information.
The preview video information may be transmitted to a server, which constructs the 3D model according to the preview video information; alternatively, the electronic device itself may construct the 3D model based on the preview video information.
Step 203: in the 3D model, at least one shooting parameter is determined.
The shooting parameters may include the height of the electronic device above the horizontal plane, the angle between the electronic device and the horizontal plane, the position of the electronic device, the distance between the electronic device and the user to be shot, the coordinates of the user to be shot, and the like.
Step 204: and taking the position information of the electronic equipment as a shooting point, and shooting the 3D model based on different height information and different angle information to generate N images.
Each piece of height information is within a first preset height range, and each piece of angle information is within a first preset angle range.
Here the shooting parameters include the height of the electronic device above the horizontal plane, the angle between the electronic device and the horizontal plane, and the position of the electronic device.
The height range is equally divided into a parts, each height being recorded as Hi; the angle range is equally divided into b parts, each angle being recorded as Aj. For example, the first preset angle range may be set to -45° to 45°, and the first preset height range may be set to 1 m to 2 m. Based on the different height information and the different angle information, N images are captured, one for each (Hi, Aj) combination, so that N = a × b.
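A minimal sketch of this sampling step; the ranges, the counts a and b, and the helper name are assumptions used only to make the N = a × b bookkeeping concrete:

import numpy as np

def sample_height_angle_grid(h_range=(1.0, 2.0), a=5, angle_range=(-45.0, 45.0), b=7):
    """Divide the preset height and angle ranges into a and b equal parts and pair them up."""
    heights = np.linspace(h_range[0], h_range[1], a)          # H1..Ha within the first preset height range
    angles = np.linspace(angle_range[0], angle_range[1], b)   # A1..Ab within the first preset angle range
    # every (Hi, Aj) combination is one candidate shooting parameter, so N = a * b
    return [(h, a_deg) for h in heights for a_deg in angles]

grid = sample_height_angle_grid()
print(len(grid))  # 35 candidate (height, angle) pairs -> 35 images of the 3D model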
When the shooting parameters include the distance between the electronic device and the user to be shot, the 3D model is shot based on different shooting distances to generate N images.
In this case the preview image in the shooting interface is first recognised to identify the background information: when the background indicates an outdoor scene, long-distance shooting is suitable, and when it indicates an indoor scene, short-distance shooting is suitable. After the shooting type is determined, the target shooting range is generated according to the distance between the electronic device and the user to be shot.
Taking the user to be shot as the origin, the distance between the electronic device and the user to be shot is divided into L parts, the user to be shot is photographed at the different distances, and the resulting images are scored. Taking the score and the shooting distance as variables and analysing them with a nonlinear programming method yields the peak point of the fitted curve, and this peak point corresponds to the optimal shooting distance.
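A hedged sketch of this distance analysis, using a simple polynomial fit as a stand-in for the nonlinear programming step described above (the fit degree, the toy data and all names are assumptions):

import numpy as np

def optimal_shooting_distance(distances, scores, degree=2):
    """Fit score as a function of shooting distance and return the distance at the peak."""
    coeffs = np.polyfit(distances, scores, degree)        # nonlinear (polynomial) fit of score vs distance
    fine = np.linspace(min(distances), max(distances), 1000)
    fitted = np.polyval(coeffs, fine)
    return fine[np.argmax(fitted)]                        # peak of the fitted curve = best shooting distance

# toy data: scores rise and then fall as the device moves away from the subject
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
s = np.array([60.0, 72.0, 85.0, 88.0, 80.0, 65.0])
print(round(float(optimal_shooting_distance(d, s)), 2))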
The optimal shooting distance is displayed in real time on the shooting interface to the user holding the electronic device, and guidance information is output according to the optimal shooting distance and the current position of the user holding the electronic device, for example: move forward 1 m, move left 2 m, and so on.
Following the guidance information, the user can reach the optimal shooting distance directly, which saves the user holding the electronic device from walking back and forth and saves shooting time.
When the shooting parameters include the coordinate information of the user to be shot and the position information of the electronic device, the position of the electronic device is taken as the shooting point, and a first field-of-view length in the transverse axis direction and a second field-of-view length in the longitudinal axis direction of the coordinates of the user to be shot are selected respectively. The first field-of-view length and the second field-of-view length are equally divided into n parts and m parts respectively, and N images are shot based on the different first and second field-of-view lengths, where the product of n and m is N. The N images are scored according to a second preset rule to obtain a second score of each image, and a second three-dimensional surface map is generated from the first field-of-view lengths, the second field-of-view lengths and the second scores, with the first and second field-of-view lengths as horizontal coordinates and the second score as the height coordinate. Each target image corresponding to a peak of the second three-dimensional surface map is then obtained.
From the viewing angle of the electronic device, the position of the electronic device is set as the origin and the coordinates of the user to be shot are set as (x, y). Field-of-view lengths are selected in the x and y dimensions and equally divided into n and m parts respectively, the corresponding coordinates being Xi and Yj.
The 3D model is shot with the user to be shot at the different coordinates (Xi, Yj) (0 < i < n, 0 < j < m), generating N images.
The N images are scored according to a second preset rule, which may, for example, be one of the following:
three-component diagram construction method: dividing a shooting interface into three equal parts by using two straight lines respectively in the transverse direction and the longitudinal direction, wherein the four straight lines form four cross points; when the subject is located at these intersections, the photograph taken is more harmonious.
Diagonal composition: when the user to be shot is a standing figure, an image in which the left foot extends forward with the toe pointing toward the lower-right corner of the frame scores higher.
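Purely as an illustration of the rule-of-thirds idea referenced above (not the patent's actual scoring formula; the normalisation is an assumption), a composition sub-score could reward a subject that sits near one of the four intersections:

import math

def thirds_score(subject_xy, frame_wh):
    """Score in [0, 1]: 1.0 when the subject sits exactly on a rule-of-thirds intersection."""
    w, h = frame_wh
    intersections = [(w / 3, h / 3), (2 * w / 3, h / 3), (w / 3, 2 * h / 3), (2 * w / 3, 2 * h / 3)]
    d = min(math.dist(subject_xy, p) for p in intersections)
    # fall off linearly with distance from the nearest intersection
    return max(0.0, 1.0 - d / math.dist((0, 0), (w / 3, h / 3)))

print(round(thirds_score((640, 360), (1920, 1080)), 3))  # subject exactly on an intersection -> 1.0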
Nonlinear fitting is then performed on the three variables; that is, a second three-dimensional surface map is generated from the first field-of-view length, the second field-of-view length and the second score, and each target image corresponding to a peak of the second three-dimensional surface map is obtained.
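A hedged sketch of this surface step: fit a quadratic surface to the (first field-of-view length, second field-of-view length, score) samples and read off its peak; the quadratic form and all names are assumptions standing in for the nonlinear fitting described above:

import numpy as np

def fit_score_surface(x, y, scores):
    """Least-squares fit of score ≈ a*x² + b*y² + c*x + d*y + e, returning the (x, y) of its peak."""
    A = np.column_stack([x**2, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e = np.linalg.lstsq(A, scores, rcond=None)[0]
    # peak of a concave quadratic surface: partial derivatives are zero in both directions
    return (-c / (2 * a), -d / (2 * b))

# toy grid of (first field-of-view length, second field-of-view length) samples peaking near (2.0, 1.5)
xs, ys = np.meshgrid(np.linspace(1.0, 3.0, 5), np.linspace(0.5, 2.5, 5))
x, y = xs.ravel(), ys.ravel()
scores = 90 - 8 * (x - 2.0) ** 2 - 6 * (y - 1.5) ** 2
print(tuple(round(float(v), 2) for v in fit_score_surface(x, y, scores)))  # ≈ (2.0, 1.5)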
Step 205: and scoring the N images according to a first preset rule to obtain a first score of each image.
The first preset rule may be, for example: when the image contains a portrait, it is judged according to a portrait scoring standard; if the image is a natural scene, it is judged according to the trisection composition method. Trisection composition means that the picture is divided horizontally into three parts and the form of the subject can be placed at the centre of each part; this composition suits subjects with several parallel focal points.
Further, the images can be scored according to the light sensation generated at different angles.
It should be noted that the first preset rule is not limited to the above and is not specifically limited here.
Step 206: and generating a first three-dimensional curved surface graph according to the height information, the angle information and the first score.
Wherein the height information and the angle information are taken as horizontal coordinates, and the first score is taken as a height coordinate.
Step 207: and acquiring each target image corresponding to the wave crest of the first three-dimensional curved surface image.
The peaks of the first three-dimensional surface map correspond to the target images with the highest scores, and these target images correspond to different height and angle information.
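A minimal sketch of reading peaks off the first surface, assuming the first scores have been arranged on the a × b grid of sampled (height, angle) pairs; the simple 4-neighbour local-maximum test is an assumption, not the patent's method:

import numpy as np

def surface_peaks(score_grid):
    """Return (row, col) indices whose score is strictly higher than all 4-neighbours."""
    peaks = []
    rows, cols = score_grid.shape
    for i in range(rows):
        for j in range(cols):
            neighbours = [score_grid[i + di, j + dj]
                          for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= i + di < rows and 0 <= j + dj < cols]
            if all(score_grid[i, j] > v for v in neighbours):
                peaks.append((i, j))
    return peaks

# rows index the sampled heights Hi, columns index the sampled angles Aj
grid = np.array([[60, 62, 61],
                 [64, 88, 66],   # single peak at (height index 1, angle index 1)
                 [63, 65, 62]])
print(surface_peaks(grid))  # [(1, 1)] -> the target image taken at that (height, angle)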
Step 208: and outputting prompt information of the target shooting area corresponding to each target image on the shooting interface.
The range of optimal height and angle information is presented to the user on the shooting interface, and guidance information may also be given there, for example: the electronic device is 50 cm too high, rotate the electronic device by 15°, and so on; the user can then adjust according to the guidance information.
The 3D model may be constructed on the electronic device from the preview information, or the preview video information may be transmitted to a server and the model constructed on the server side. Likewise, shooting the 3D model based on the different shooting parameters to generate the N images, and scoring the N images, may be performed by the electronic device or implemented on the server side.
Besides generating the target shooting area from a single set of shooting parameters, target parameter information may be generated based on the height of the electronic device above the horizontal plane, the angle between the electronic device and the horizontal plane, the position of the electronic device, the distance between the electronic device and the user to be shot, and the coordinates of the user to be shot, and guidance information may be output on the shooting interface, as shown in the shooting interface diagrams of fig. 3 and fig. 4. The user can reach the best shooting position according to the guidance information and adjust the electronic device to the best shooting height and angle so as to capture the best image.
When the user adjusts the position of the electronic device according to the target shooting area, video information is acquired in real time, and the 3D model is corrected according to the video information.
When the user adjusts the viewing angle during shooting, the electronic device automatically captures preview video information in real time and either transmits it to the remote server or continues to refine the constructed 3D model on the device itself, so as to correct the 3D model.
Correcting the 3D model makes the subsequently generated images more accurate, so the target shooting area is output more accurately.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
Example three
Referring to fig. 5, a block diagram of an electronic device according to a third embodiment of the present invention is shown.
The electronic equipment provided by the embodiment of the invention comprises: a first determining module 301, configured to determine preview video information; a building module 302, configured to build a 3D model according to the preview video information; a second determining module 303, configured to determine at least one shooting parameter in the 3D model; a shooting module 304, configured to perform shooting according to the at least one shooting parameter to generate N images; a third determining module 305, configured to determine target images with scores greater than a preset score from among the N images; and the output module 306 is configured to output, on a shooting interface, prompt information of a target shooting area corresponding to each target image.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
Example four
Referring to fig. 6, a block diagram of an electronic device according to a fourth embodiment of the present invention is shown.
The electronic equipment provided by the embodiment of the invention comprises: a first determining module 401, configured to determine preview video information; a constructing module 402, configured to construct a 3D model according to the preview video information; a second determining module 403, configured to determine at least one shooting parameter in the 3D model; a shooting module 404, configured to perform shooting according to the at least one shooting parameter to generate N images; a third determining module 405, configured to determine each target image of the N images, where the score is greater than a preset score; and an output module 406, configured to output, on a shooting interface, prompt information of a target shooting area corresponding to each target image.
Preferably, the shooting parameters include height information of the electronic device from a horizontal plane, angle information of the electronic device from the horizontal plane, and position information of the electronic device, and the shooting module 404 includes: the first shooting submodule 4041 is configured to take the position information of the electronic device as a shooting point, and shoot the 3D model based on the different height information and the different angle information to generate N images, where each height information is within a first preset height range, and each angle information is within a first preset angle range.
Preferably, the third determining module 405 comprises: the first determining submodule 4051 is configured to score the N images according to a first preset rule to obtain a first score of each image; the first generating submodule 4052 is configured to generate a first three-dimensional surface map according to the height information, the angle information, and the first score; wherein the height information and the angle information are taken as horizontal coordinates, and the first score is taken as a height coordinate; the first obtaining sub-module 4053 is configured to obtain each target image corresponding to a peak of the first three-dimensional curved surface map.
Preferably, the shooting parameters include a distance between the electronic device and a user to be shot, and the shooting module 404 includes: the second shooting submodule 4042 is configured to shoot the 3D model based on different shooting distances to generate N images.
Preferably, the shooting parameters include coordinate information of a user to be shot and position information of the electronic device, and the shooting module 404 includes: a selecting submodule 4043, configured to select, with the position information of the electronic device as a shooting point, a first field of view length in the transverse axis direction and a second field of view length in the longitudinal axis direction of the coordinate information of the user to be shot, respectively; an equal-division submodule 4044, configured to equally divide the first field of view length and the second field of view length into n parts and m parts, respectively; and a third shooting submodule 4045, configured to shoot N images based on the different first and second field of view lengths; wherein the product of n and m is N.
Preferably, the third determining module 405 comprises: the second determining submodule 4054 is configured to score the N images according to a second preset rule, so as to obtain a second score of each image; the second generating submodule 4055 is configured to generate a second three-dimensional surface map according to each of the first view length, the second view length, and the second score; wherein the first field of view length and the second field of view length are taken as horizontal coordinates, and the second score is taken as a height coordinate; the second obtaining sub-module 4056 is configured to obtain each target image corresponding to a peak of the second three-dimensional curved surface map.
Preferably, the electronic device further includes: an obtaining module 407, configured to obtain video information in real time when a user adjusts the position information of the electronic device according to the target shooting area after the output module 406 outputs the prompt information of the target shooting area corresponding to each target image on a shooting interface; a modification module 408, configured to modify the 3D model according to the video information.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
Example five
Referring to fig. 7, a hardware structure diagram of an electronic device for implementing various embodiments of the present invention is shown.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510 for determining preview video information; constructing a 3D model according to the preview video information; determining at least one shooting parameter in the 3D model; shooting the 3D model to generate N images according to each shooting parameter; determining each target image with the score larger than a preset score in the N images; and outputting the target shooting area corresponding to each target image on a shooting interface for the user to adjust.
In the embodiments of the invention, preview video information is determined; a 3D model is constructed according to the preview video information; at least one shooting parameter is determined in the 3D model; shooting is performed according to the at least one shooting parameter to generate N images; each target image whose score is greater than a preset score is determined among the N images; and prompt information of the target shooting area corresponding to each target image is output on the shooting interface. Because the target shooting area can be generated from the preview video information during shooting, the user can be automatically prompted where to shoot and does not need to search manually for the best shooting area, so that the captured images better meet the user's requirements.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 and then output.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 7, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 510, a memory 509, and a computer program that is stored in the memory 509 and can be run on the processor 510, and when the computer program is executed by the processor 510, the processes of the above-mentioned shooting information prompting method embodiment are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting information prompting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A shooting information prompting method is applied to electronic equipment and is characterized by comprising the following steps:
determining preview video information;
constructing a 3D model according to the preview video information;
determining at least one shooting parameter in the 3D model;
shooting according to the at least one shooting parameter to generate N images;
determining each target image with the score larger than a preset score in the N images;
and outputting prompt information of the target shooting area corresponding to each target image on a shooting interface.
2. The method according to claim 1, wherein the shooting parameters include height information of the electronic device from a horizontal plane, angle information of the electronic device from the horizontal plane, and position information of the electronic device, and the step of generating N images by shooting according to the at least one shooting parameter comprises:
and taking the position information of the electronic equipment as a shooting point, and shooting the 3D model based on different height information and different angle information to generate N images, wherein each piece of height information is in a first preset height range, and each piece of angle information is in a first preset angle range.
3. The method of claim 2, wherein the step of determining each target image of the N images with a score greater than a predetermined score comprises:
the method comprises the steps of grading N images according to a first preset rule to obtain a first score of each image;
generating a first three-dimensional surface graph according to the height information, the angle information and the first score; wherein the height information and the angle information are taken as horizontal coordinates, and the first score is taken as a height coordinate;
and acquiring each target image corresponding to the wave crest of the first three-dimensional curved surface image.
4. The method according to claim 1, wherein the shooting parameters include a distance between the electronic device and a user to be shot, and the step of generating N images by shooting according to the at least one shooting parameter includes:
and shooting the 3D model based on different shooting distances to generate N images.
5. The method according to claim 1, wherein the shooting parameters include coordinate information of a user to be shot and position information of the electronic device, and the step of generating N images by shooting according to the at least one shooting parameter includes:
respectively selecting a first view field length in the transverse axis direction and a second view field length in the longitudinal axis direction of the coordinate information of the user to be shot by taking the position information of the electronic device as a shooting point;
equally dividing the first field of view length and the second field of view length into n parts and m parts, respectively;
based on different first view field lengths and second view field lengths, shooting N images;
wherein the product of n and m is N.
6. The method of claim 5, wherein the step of determining each target image of the N images with a score greater than a predetermined score comprises:
scoring the N images according to a second preset rule to obtain a second score of each image;
generating a second three-dimensional curved surface graph according to the first view field length, the second view field length and the second score; wherein the first view field length and the second view field length are taken as horizontal coordinates, and the second score is taken as a height coordinate;
and acquiring each target image corresponding to the wave crest of the second three-dimensional curved surface image.
7. The method according to claim 1, wherein after the step of outputting the prompt information of the target shooting area corresponding to each target image on the shooting interface, the method further comprises:
when the user adjusts the position information of the electronic equipment according to the target shooting area, video information is obtained in real time;
and correcting the 3D model according to the video information.
8. An electronic device, characterized in that the electronic device comprises:
the first determining module is used for determining preview video information;
the construction module is used for constructing a 3D model according to the preview video information;
a second determination module for determining at least one shooting parameter in the 3D model;
the shooting module is used for shooting according to the at least one shooting parameter to generate N images;
the third determining module is used for determining each target image with the score larger than the preset score in the N images;
and the output module is used for outputting prompt information of the target shooting area corresponding to each target image on a shooting interface.
9. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the photographing information prompting method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the photographing information presentation method according to any one of claims 1 to 7.
CN201911195326.3A 2019-11-28 2019-11-28 Shooting information prompting method and electronic equipment Active CN110913140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195326.3A CN110913140B (en) 2019-11-28 2019-11-28 Shooting information prompting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911195326.3A CN110913140B (en) 2019-11-28 2019-11-28 Shooting information prompting method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110913140A true CN110913140A (en) 2020-03-24
CN110913140B CN110913140B (en) 2021-05-28

Family

ID=69820368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195326.3A Active CN110913140B (en) 2019-11-28 2019-11-28 Shooting information prompting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110913140B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104145474A (en) * 2011-12-07 2014-11-12 英特尔公司 Guided image capture
CN104869317A (en) * 2015-06-02 2015-08-26 广东欧珀移动通信有限公司 Intelligent device shooting method and device
CN107690673A (en) * 2017-08-24 2018-02-13 深圳前海达闼云端智能科技有限公司 Image processing method and device and server
US20190253614A1 (en) * 2018-02-15 2019-08-15 Adobe Inc Smart guide to capture digital images that align with a target image model

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724437A (en) * 2020-06-17 2020-09-29 深圳市商汤科技有限公司 Visual positioning method and related device, equipment and storage medium
CN111724437B (en) * 2020-06-17 2022-08-05 深圳市商汤科技有限公司 Visual positioning method and related device, equipment and storage medium
CN111935393A (en) * 2020-06-28 2020-11-13 百度在线网络技术(北京)有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112188088A (en) * 2020-09-21 2021-01-05 厦门大学 Underwater self-photographing system
WO2023092380A1 (en) * 2021-11-25 2023-06-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of suggesting shooting position and posture for electronic device having camera, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
CN110913140B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN110913140B (en) Shooting information prompting method and electronic equipment
CN109361865B (en) Shooting method and terminal
CN108184050B (en) Photographing method and mobile terminal
CN110557575B (en) Method for eliminating glare and electronic equipment
CN110365907B (en) Photographing method and device and electronic equipment
CN108038825B (en) Image processing method and mobile terminal
CN109660723B (en) Panoramic shooting method and device
CN108153422B (en) Display object control method and mobile terminal
CN110113528B (en) Parameter obtaining method and terminal equipment
CN110213485B (en) Image processing method and terminal
CN108924412B (en) Shooting method and terminal equipment
CN109685915B (en) Image processing method and device and mobile terminal
CN109474786B (en) Preview image generation method and terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN111031234B (en) Image processing method and electronic equipment
CN110602389B (en) Display method and electronic equipment
CN108881544B (en) Photographing method and mobile terminal
US20230014409A1 (en) Detection result output method, electronic device and medium
CN108174110B (en) Photographing method and flexible screen terminal
CN109819166B (en) Image processing method and electronic equipment
CN109361874B (en) Photographing method and terminal
CN111177420A (en) Multimedia file display method, electronic equipment and medium
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN110312070B (en) Image processing method and terminal
CN110908517A (en) Image editing method, image editing device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant