CN107395971B - Image acquisition method, image acquisition equipment and computer-readable storage medium - Google Patents
- Publication number
- CN107395971B CN107395971B CN201710641284.6A CN201710641284A CN107395971B CN 107395971 B CN107395971 B CN 107395971B CN 201710641284 A CN201710641284 A CN 201710641284A CN 107395971 B CN107395971 B CN 107395971B
- Authority
- CN
- China
- Legal status: Active (the status shown is an assumption and is not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The embodiment of the invention discloses an image acquisition method comprising the following steps: receiving an image acquisition instruction; in response to the image acquisition instruction, acquiring a first image through a first image collector and a second image through a second image collector; if the first image has a missing image, determining the position of the missing image in a preset area to obtain a missing position; and merging the first image and the second image based on the missing position to obtain a target image. The embodiment of the invention also discloses an image acquisition device and a computer-readable storage medium.
Description
Technical Field
The present invention relates to an intelligent photographing technology in the field of photographing, and in particular, to an image capturing method, an image capturing apparatus, and a computer-readable storage medium.
Background
With the popularization and development of terminals, terminal performance has become ever stronger and terminal functions ever richer; in particular, the photographing function of terminals is increasingly powerful. At present, when shooting a panoramic image with a terminal, the user needs to move the terminal slowly along a shooting reference line to finally obtain the panoramic image.
However, in the prior art, when the user moves the terminal along the shooting reference line, it is difficult to ensure that the terminal always moves horizontally along the line, and the terminal may deviate from it; as a result, part of the captured panoramic image is lost, and the shooting effect of the panoramic image is reduced.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image acquisition method, an image acquisition device, and a computer-readable storage medium, to solve the problem in existing image acquisition technology that the user cannot guarantee that the terminal always moves horizontally along the shooting reference line when capturing a panoramic image, so that the resulting panoramic image may be partially missing; the embodiments compensate for the missing part of the panoramic image and improve the effect of the panoramic image.
To achieve the above purpose, the technical solution of the invention is realized as follows:
a method of image acquisition, the method comprising:
receiving an image acquisition instruction;
in response to the image acquisition instruction, acquiring a first image through a first image collector, and acquiring a second image through a second image collector;
if the first image has a missing image, determining the position of the missing image in a preset area to obtain a missing position;
and merging the first image and the second image based on the missing position to obtain a target image.
Optionally, if the first image has a missing image, the determining the position of the missing image in the preset area to obtain a missing position includes:
if the first image has a missing image, determining a first sub-image corresponding to the missing image; wherein the first sub-image is a sub-image that synthesizes the first image;
and determining the position of the missing image of the first sub-image in the preset area to obtain the missing position.
Optionally, the merging the first image and the second image based on the missing position to obtain a target image includes:
acquiring an image corresponding to the missing position from the second image based on a first mapping relation between the first image and the second image to obtain a third image; wherein the first mapping relation is preset and stored;
and splicing the third image to the missing position of the first image to obtain the target image.
Optionally, the acquiring, based on the first mapping relation between the first image and the second image, an image corresponding to the missing position from the second image to obtain a third image includes:
acquiring a second sub-image corresponding to the first sub-image from a second image; wherein the second sub-image is a sub-image that synthesizes the second image;
acquiring an image corresponding to the missing position from the second sub-image based on a second mapping relation between the first sub-image and the second sub-image to obtain a third image; wherein the second mapping relation is preset and stored.
Optionally, the stitching the third image to the missing position of the first image to obtain the target image includes:
splicing the third image to the missing position of the first sub-image to obtain a target sub-image;
and replacing the first sub-image with the target sub-image to obtain the target image.
Optionally, the determining the position of the missing image of the first sub-image in the preset area to obtain the missing position includes:
determining the offset distance from a preset reference line when the first sub-image is acquired;
processing the first sub-image based on the offset distance to obtain a third sub-image;
and determining the position of the missing image of the third sub-image in the preset area to obtain the missing position.
Optionally, the processing the first sub-image based on the offset distance to obtain a third sub-image includes:
and adjusting the position of the first sub-image in the preset area according to the offset distance to obtain the third sub-image.
Optionally, the processing the first sub-image based on the offset distance to obtain a third sub-image further includes:
acquiring a previous sub-image adjacent to the first sub-image from the first image according to the acquisition sequence of the first sub-image to obtain a reference sub-image;
adjusting the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image;
and adjusting the position of the fourth sub-image in the preset area based on the part, with the same image, of the reference sub-image as the fourth sub-image to obtain the third sub-image.
An image acquisition device, the device comprising: a processor, a memory, a communication bus, and an image collector;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute an image acquisition program stored in the memory to implement the steps of:
receiving an image acquisition instruction;
in response to the image acquisition instruction, acquiring a first image through a first image collector, and acquiring a second image through a second image collector;
if the first image has a missing image, determining the position of the missing image in a preset area to obtain a missing position;
and merging the first image and the second image based on the missing position to obtain a target image.
A computer readable storage medium having stored thereon an image acquisition program which, when executed by a processor, implements the steps of the image acquisition method according to any one of the preceding claims.
According to the image acquisition method, the image acquisition device, and the computer-readable storage medium provided by the embodiments of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. In this way, the terminal can merge, into the missing position of the first image collected by the first image collector, the corresponding image from the second image collected by the second image collector, and thereby obtain the target image. This solves the problem in existing image acquisition technology that the panoramic image may be partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line; the missing part of the panoramic image is compensated for, and the effect of the panoramic image is improved.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an alternative mobile terminal implementing various embodiments of the present invention;
fig. 2 is a schematic diagram of a communication network system architecture according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of an image acquisition method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of another image capturing method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of another image acquisition method according to an embodiment of the present invention;
fig. 6 is a schematic view of an application scenario of an image capturing device according to an embodiment of the present invention;
fig. 7 is a schematic view of an application scenario of another image capturing device according to an embodiment of the present invention;
fig. 8 is a schematic view of an application scenario of another image capturing device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example, and those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be used to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can communicate with the network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex-Long Term Evolution), and TDD-LTE (Time Division Duplex-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations performed on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 110, and it can also receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. In particular, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile telecommunications technology, which comprises a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers, such as a home location register (not shown), for management functions, and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201, among other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
An embodiment of the present invention provides an image acquisition method, which is shown in fig. 3 and includes the following steps:
Step 301: receive an image acquisition instruction.
Specifically, step 301 may be implemented by an image acquisition device. The image acquisition device may be a terminal, and the terminal may be a mobile terminal with a photographing function, for example a mobile phone, a tablet computer, a notebook computer, or another terminal capable of taking photographs.
The image acquisition instruction may be a photographing instruction sent to the terminal by the user, after opening a photographing application on the terminal, by touching the display screen or through voice input; for example, it may be an instruction that starts the photographing application to capture a panoramic image.
The panoramic image is obtained mainly by panoramic shooting technology: shooting 360 degrees horizontally and 180 degrees vertically around a certain point, and stitching the multiple captured images into one picture. Besides the shooting itself, panoramic shooting also involves a panoramic stitching method; commonly used stitching methods include the phase correlation method, the transformation optimization method, and the like. In methods that align images based on geometric features, the overlapping area between two adjacent images to be stitched cannot be too small, for example not less than 15%. The image stitching algorithm comprises the following steps: first, define a mapping model; commonly used mapping models include the sphere, the cylinder, and the plane, with the spherical mapping model the most widely applied. Second, extract feature points from the input images and match the features to obtain the mapping relation between the input images. Third, warp the images according to the mapping relation and align them. Fourth, eliminate color differences between the images by color adjustment, and eliminate stitching seams by image fusion.
In the first step, the mapping model is the carrier for image mapping, corresponding to a transformation that maps a two-dimensional image into three-dimensional space. In the second step, feature point extraction and matching can be performed with the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm (a scale- and rotation-invariant detector and descriptor), the ORB algorithm, and other algorithms. In the fourth step, a commonly used algorithm for adjusting color differences is the Reinhard method, and commonly used image fusion methods include the alpha-average algorithm, feather blending, the multi-band blending algorithm, and the like.
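By way of an illustrative sketch (not part of the claimed method), the alpha-average fusion mentioned above can be written in a few lines of NumPy. The function name and the single-channel, linear-ramp formulation are assumptions for demonstration only:

```python
import numpy as np

def alpha_average_blend(left, right, overlap):
    """Blend two horizontally adjacent grayscale images whose last/first
    `overlap` columns cover the same scene content.

    Weights ramp linearly from 1 -> 0 (left) and 0 -> 1 (right) across
    the overlap, hiding the stitching seam (a toy alpha-average fusion).
    """
    h = left.shape[0]
    w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w), dtype=np.float64)

    # Non-overlapping parts are copied directly.
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]

    # Linear alpha ramp across the overlap region.
    alpha = np.linspace(1.0, 0.0, overlap)
    out[:, left.shape[1] - overlap:left.shape[1]] = (
        alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    )
    return out
```

In practice the overlap would first be aligned via the feature-based mapping relation described above; here the images are assumed pre-aligned.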
Step 302: in response to the image acquisition instruction, acquire a first image through a first image collector and a second image through a second image collector.
Specifically, step 302 may be implemented by the image acquisition device. The first image collector and the second image collector may be invoked by the photographing application of the terminal and are used for collecting images. The first image may be a panoramic image synthesized from images acquired by the first image collector, or may include a sub-image acquired by the first image collector; similarly, the second image may be a panoramic image synthesized from images acquired by the second image collector, or may include a sub-image acquired by the second image collector.
The terminal receives the image acquisition instruction and, in response, can acquire images through the first image collector and the second image collector at the same time. The images collected by the first image collector are generally the ones that are finally stored; while the first image collector captures the shot object, the second image collector can capture it as well. Used together with the first image collector, the second image collector can sense the distance of the shot object, separate the shot object from the background, and blur the background, enabling later adjustment of the image's depth of field, or acquire different positions as focal points to obtain different image effects. That is, the functions the second image collector enables include post-shot refocusing, image matting (cutout), collage, seamless special effects, and the like.
Specifically, step 303, in which the position of the missing image in the preset area is determined to obtain the missing position if the first image has a missing image, may be implemented by the image acquisition device. The preset area may be an area of the same preset size in which the first image and the second image are previewed. Image recognition technology can be used to identify whether the first image has a missing image and to determine the position of the missing image in the preset area; for example, a grayscale recognition technique can identify the missing image in the first image and simultaneously determine its position, yielding the missing position.
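The grayscale-based detection just described can be sketched as a bounding-box search over unfilled pixels. This is a simplified stand-in for the patent's image recognition technology; the function name and the convention that never-written pixels carry a known fill value are assumptions for illustration.

```python
import numpy as np

def find_missing_region(image, fill_value=0):
    """Locate the bounding box of unfilled (missing) pixels in a
    grayscale panorama preview; fill_value marks never-written pixels."""
    mask = image == fill_value
    if not mask.any():
        return None                          # no missing image
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    # (top, bottom, left, right) of the missing position
    return int(rows[0]), int(rows[-1]), int(cols[0]), int(cols[-1])

preview = np.full((6, 8), 200, dtype=np.uint8)
preview[1:3, 4:7] = 0                        # simulate a missing patch
print(find_missing_region(preview))          # (1, 2, 4, 6)
```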
And step 304, merging the first image and the second image based on the missing position to obtain a target image.
Specifically, step 304, merging the first image and the second image based on the missing position to obtain the target image, may be implemented by the image acquisition device. The target image may be the image desired by the user, obtained by repairing the missing image; for example, when the user uses a terminal for panoramic shooting, the target image may be the panoramic image desired by the user, obtained by repairing the missing image.
A mapping relation between the first image and the second image is obtained based on the relation, set by the manufacturer or the user, between images collected by the first image collector and images collected by the second image collector. Then, based on this mapping relation, the image information at the position corresponding to the missing position of the first image can be obtained from the second image and stitched to the missing position of the first image by an image stitching method, yielding a first image without any missing image, that is, the target image.
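The fetch-and-stitch step just described can be sketched as follows, assuming for illustration that the mapping relation reduces to a fixed pixel offset between the two collectors' coordinate frames (a real mapping would involve a calibrated homography; all names here are hypothetical).

```python
import numpy as np

def fill_missing(first, second, box, offset):
    """Copy the pixels that `second` holds at the position mapped from
    `first`'s missing box, and stitch them into `first`.
    box = (top, bottom, left, right); offset = (dy, dx) assumed mapping."""
    top, bottom, left, right = box
    dy, dx = offset
    patch = second[top + dy:bottom + 1 + dy, left + dx:right + 1 + dx]
    repaired = first.copy()
    repaired[top:bottom + 1, left:right + 1] = patch   # stitch into place
    return repaired

first = np.zeros((4, 4), dtype=np.uint8)               # missing region is all of it
second = np.arange(36, dtype=np.uint8).reshape(6, 6)   # wider auxiliary view
repaired = fill_missing(first, second, box=(1, 2, 1, 2), offset=(1, 1))
```

A production version would also blend the seam, as in step four of the stitching algorithm above.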
According to the image acquisition method provided by the embodiment of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. The terminal can thus merge the part of the second image that corresponds to the missing position of the first image into the first image to obtain the target image. This solves the problem in existing image acquisition technology that the obtained panoramic image is partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line while collecting a panoramic image; the missing part of the panoramic image is compensated and the effect of the panoramic image is improved.
Based on the foregoing embodiments, an embodiment of the present invention provides an image capturing method, which is shown in fig. 4 and includes the following steps:
Specifically, the description is given by taking an example that the image acquisition instruction is an instruction to acquire a panoramic image, and the terminal receives the instruction to acquire the panoramic image.
Specifically, the description takes as an example that the first image collector is a main camera, the second image collector is an auxiliary camera, the first image is a panoramic image synthesized from images collected by the first image collector, and the second image is a panoramic image synthesized from images collected by the second image collector, where the field angle of the main camera is smaller than that of the auxiliary camera. In response to the instruction to collect a panoramic image, the terminal starts a panoramic image collection application program and generates prompt information prompting the user to move the terminal along a reference line. While the user moves the terminal, the first image is collected through the main camera and the second image is collected through the auxiliary camera.
Specifically, the terminal may perform image recognition on the acquired first image, and determine whether the first image has a missing image. If the first image acquired by the main camera has a missing image, the terminal can determine the position of the missing image in the preset area according to an image recognition technology to obtain the missing position.
And step 404, the image acquisition device acquires an image corresponding to the missing position from the second image based on the first mapping relation between the first image and the second image, and obtains a third image.
Wherein, the first mapping relation is preset and stored.
Specifically, the first mapping relationship is preset and stored according to the corresponding relationship between images acquired by the main camera and images acquired by the auxiliary camera. The terminal determines, according to the first mapping relationship, the position in the second image corresponding to the missing position of the first image, and acquires the image at that position to obtain the third image.
And 405, splicing the third image to the missing position of the first image by the image acquisition equipment to obtain a target image.
According to the image acquisition method provided by the embodiment of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. The terminal can thus merge the part of the second image that corresponds to the missing position of the first image into the first image to obtain the target image. This solves the problem in existing image acquisition technology that the obtained panoramic image is partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line while collecting a panoramic image; the missing part of the panoramic image is compensated and the effect of the panoramic image is improved.
Based on the foregoing embodiments, an embodiment of the present invention provides an image capturing method, which is shown in fig. 5 and includes the following steps:
Wherein the first sub-image is a sub-image that synthesizes the first image.
Specifically, the first image is a set of first sub-images. The first image may be synthesized by the terminal while the sub-images are being acquired during movement, or synthesized after all sub-images have been acquired, from the moment the terminal starts moving until it stops. The terminal judges whether the first image has a missing image and, if so, determines the first sub-image corresponding to the missing image, that is, judges whether each first sub-image has a missing image. This judgment can be performed simultaneously for every first sub-image in the synthesized first image, or performed on each first sub-image in the order of acquisition.
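As a sketch of synthesizing the first image from its sub-image set, the fragment below pastes each frame a fixed number of columns after the previous one. Real stitching would blend the overlapping columns; the step size and names are assumptions for illustration only.

```python
import numpy as np

def compose_panorama(sub_images, step):
    """Paste each sub-image `step` columns after the previous one onto a
    shared canvas (overlap blending omitted for brevity)."""
    h, w = sub_images[0].shape
    canvas = np.zeros((h, w + step * (len(sub_images) - 1)),
                      dtype=sub_images[0].dtype)
    for i, frame in enumerate(sub_images):
        canvas[:, i * step:i * step + w] = frame   # later frames overwrite overlap
    return canvas

frames = [np.full((2, 4), v, dtype=np.uint8) for v in (1, 2, 3)]
panorama = compose_panorama(frames, step=2)
print(panorama.shape)   # (2, 8)
```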
Wherein the second sub-image is a sub-image that synthesizes the second image.
Specifically, each time the main camera collects a frame of first sub-image, the auxiliary camera also collects a frame of second sub-image, so that the first sub-images and the second sub-images have a one-to-one correspondence.
Wherein the second mapping relation is preset and stored.
Specifically, the second mapping relationship is preset and stored according to the mapping relationship between the image acquired by the main camera and the image acquired by the auxiliary camera.
And step 507, splicing the third image to the missing position of the first sub-image by the image acquisition equipment to obtain a target sub-image.
And step 508, replacing the first sub-image with the target sub-image by the image acquisition equipment to obtain a target image.
Based on the above embodiment, the step 504 of determining the position of the missing image of the first sub-image in the preset area by the image capturing device may further be implemented by:
step 504a, the image acquisition device determines an offset distance from a preset reference line when acquiring the first sub-image.
Specifically, the preset reference line may be a horizontal reference line, preset by the manufacturer, that is displayed on the shooting screen while the panoramic image is captured, prompting the user to move the terminal horizontally along it when shooting the panoramic image. The terminal can detect gravity changes through its gravity sensor to obtain the offset distance by which the terminal deviates from the preset reference line in the vertical direction between acquiring the previous first sub-image and the current one.
Step 504b, the image acquisition device processes the first sub-image based on the offset distance to obtain a third sub-image.
Correspondingly, the step 504b of processing the first sub-image by the image acquisition device based on the offset distance to obtain the third sub-image can be implemented in the following manners a or b-d:
a. and the image acquisition equipment adjusts the position of the first sub-image in the preset area according to the offset distance to obtain a third sub-image.
Specifically, the terminal adjusts the position of the acquired first sub-image in the preset area according to the acquired offset distance. This eliminates the distortion of the final panoramic image that occurs in the prior art when the acquired first sub-image drifts in the vertical direction while the terminal moves horizontally during panoramic acquisition.
Or,
b. the image acquisition equipment acquires a previous sub-image adjacent to the first sub-image from the first image according to the acquisition sequence of the first sub-image to obtain a reference sub-image.
c. And the image acquisition equipment adjusts the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image.
d. And the image acquisition equipment adjusts the position of the fourth sub-image in the preset area based on the part, with the same image, of the reference sub-image as the fourth sub-image to obtain a third sub-image.
Specifically, in steps b-d, after the distribution position of the first sub-image in the preset area is adjusted according to the offset distance, the image that the reference sub-image shares with the first sub-image is determined and a specific shot object is selected from it. The distribution positions of that shot object in the preset area, in the direction of the terminal's offset, are obtained for both the reference sub-image and the first sub-image, and the difference between the two positions is calculated. The first sub-image, whose distribution position in the preset area has already been adjusted, is then adjusted again in the preset area based on this difference, finally yielding the third sub-image.
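Steps b-d can be sketched as a coarse vertical shift by the sensed offset followed by a fine correction from the shared object's position in the reference sub-image. Note the simplifications: `np.roll` wraps rows around instead of leaving them blank, only a vertical offset is modeled, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def realign(sub, offset_px, ref_obj_row, sub_obj_row):
    """Coarse shift by the sensed offset (step c), then a fine correction
    from the residual between where a shared object sits in the reference
    sub-image and in the shifted sub-image (step d)."""
    fourth = np.roll(sub, -offset_px, axis=0)           # fourth sub-image
    residual = ref_obj_row - (sub_obj_row - offset_px)  # remaining misalignment
    third = np.roll(fourth, residual, axis=0)           # third sub-image
    return third

# A shared object marked at row 6 of the current sub-image, at row 3 of
# the reference sub-image, with a 2-pixel sensed offset.
sub = np.zeros((10, 4), dtype=np.uint8)
sub[6, :] = 255
third = realign(sub, offset_px=2, ref_obj_row=3, sub_obj_row=6)
print(int(np.argmax(third[:, 0])))   # 3
```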
And step 504c, the image acquisition equipment determines the position of the missing image of the third sub-image in the preset area to obtain the missing position.
In an application scenario corresponding to an embodiment of the present invention, the image acquisition device is a mobile phone with rear dual cameras arranged horizontally, as shown in fig. 6; the rear dual cameras comprise A and B, where A is the main camera and B is the auxiliary camera. If the field angle of camera A is smaller than that of camera B, the image acquired by camera B includes the image acquired by camera A. The distribution of the images acquired by cameras A and B in the preset area is shown in fig. 7, where C is the image acquired by camera A, D is the image acquired by camera B, and the image E within D is the same as the image C acquired by camera A. Assume the coordinate positions of the region where the missing image of C is located are (0,0), (0,1), (1,0) and (1,1). First, the image E identical to C can be found according to the preset first mapping relationship between C and D. Then, from the coordinates (0,0), (0,1), (1,0), (1,1) of the missing region in C and the preset second mapping relationship, the corresponding region in E is determined to be (1,1), (1,2), (2,1), (2,2). The image M within that region of E is acquired and stitched to the coordinates (0,0), (0,1), (1,0), (1,1) of the missing region of C, yielding an image C without any missing image, as shown in fig. 8.
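The numeric walk-through above can be reproduced directly: under an assumed (+1, +1) offset standing in for the second mapping relation, the missing unit region of image C is filled from the corresponding region of E. The pixel values are synthetic and purely illustrative.

```python
import numpy as np

# Image C with its (0,0)-(1,1) region missing, and the matching view E.
C = np.arange(16, dtype=float).reshape(4, 4)
C[0:2, 0:2] = np.nan                     # missing region at (0,0)-(1,1)
E = np.arange(16, dtype=float).reshape(4, 4) + 100

offset = (1, 1)                          # assumed second mapping relation
M = E[0 + offset[0]:2 + offset[0], 0 + offset[1]:2 + offset[1]]
C[0:2, 0:2] = M                          # stitch M into the missing region
print(C[0, 0], C[1, 1])                  # 105.0 110.0
```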
It should be noted that, for the explanation of the same steps or concepts in the present embodiment as in the other embodiments, reference may be made to the description in the other embodiments, and details are not described here.
According to the image acquisition method provided by the embodiment of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. The terminal can thus merge the part of the second image that corresponds to the missing position of the first image into the first image to obtain the target image. This solves the problem in existing image acquisition technology that the obtained panoramic image is partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line while collecting a panoramic image; the missing part of the panoramic image is compensated and the effect of the panoramic image is improved. Furthermore, the problem of panoramic image distortion in the prior art is solved.
An embodiment of the present invention provides an image capturing apparatus 6, which may be applied to an image capturing method provided in the embodiment corresponding to fig. 3 to 5, and as shown in fig. 9, the image capturing apparatus may include: processor 61, memory 62, communication bus 63 and image collector 64, wherein:
the communication bus 63 is used to realize connection communication between the processor 61 and the memory 62.
The processor 61 is configured to execute an image acquisition program stored in the memory 62 to implement the following steps:
and receiving an image acquisition instruction.
And responding to the image acquisition instruction, acquiring a first image through the first image acquirer, and acquiring a second image through the second image acquirer.
And if the first image has a missing image, determining the position of the missing image in the preset area to obtain the missing position.
And merging the first image and the second image based on the missing position to obtain the target image.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
if the first image has a missing image, determining a first sub-image corresponding to the missing image; wherein the first sub-image is a sub-image that synthesizes the first image.
And determining the position of the missing image of the first sub-image in the preset area to obtain the missing position.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
and acquiring an image corresponding to the missing position from the second image based on a first mapping relation between the first image and the second image to obtain a third image.
Wherein, the first mapping relation is preset and stored.
And splicing the third image to the missing position of the first image to obtain a target image.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
a second sub-image corresponding to the first sub-image is acquired from the second image.
Wherein the second sub-image is a sub-image that synthesizes the second image.
And acquiring an image corresponding to the missing position from the second sub-image based on a second mapping relation between the first sub-image and the second sub-image to obtain a third image.
Wherein the second mapping relation is preset and stored.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
and splicing the third image to the missing position of the first sub-image to obtain a target sub-image.
And replacing the first sub-image with the target sub-image to obtain the target image.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
and determining the offset distance from the preset reference line when the first sub-image is acquired.
And processing the first sub-image based on the offset distance to obtain a third sub-image.
And determining the position of the missing image of the third sub-image in the preset area to obtain the missing position.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
and adjusting the position of the first sub-image in the preset area according to the offset distance to obtain a third sub-image.
Specifically, in other embodiments of the present invention, the processor 61 is further configured to execute an image capturing program to implement the following steps:
and according to the acquisition sequence of the first sub-images, acquiring a previous sub-image adjacent to the first sub-image from the first image to obtain a reference sub-image.
And adjusting the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image.
And adjusting the position of the fourth sub-image in the preset area based on the part, having the same image, of the reference sub-image as the fourth sub-image to obtain a third sub-image.
It should be noted that, in the interaction process between steps implemented by the processor in this embodiment, reference may be made to the interaction process in the image acquisition method provided in the embodiments corresponding to fig. 3 to 5, and details are not described here.
According to the image acquisition equipment provided by the embodiment of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. The terminal can thus merge the part of the second image that corresponds to the missing position of the first image into the first image to obtain the target image. This solves the problem in existing image acquisition technology that the obtained panoramic image is partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line while collecting a panoramic image; the missing part of the panoramic image is compensated and the effect of the panoramic image is improved.
Based on the foregoing embodiments, embodiments of the invention provide a computer-readable storage medium storing one or more image acquisition programs, the one or more image acquisition programs being executable by one or more processors to implement the steps of:
and receiving an image acquisition instruction.
And responding to the image acquisition instruction, acquiring a first image through the first image acquirer, and acquiring a second image through the second image acquirer.
And if the first image has a missing image, determining the position of the missing image in the preset area to obtain the missing position.
And merging the first image and the second image based on the missing position to obtain the target image.
Specifically, in another embodiment of the present invention, if there is a missing image in the first image, determining the position of the missing image in the preset area to obtain the missing position includes the following steps:
and if the first image has a missing image, determining a first sub-image corresponding to the missing image.
Wherein the first sub-image is a sub-image that synthesizes the first image.
And determining the position of the missing image of the first sub-image in the preset area to obtain the missing position.
Specifically, in another embodiment of the present invention, merging the first image and the second image based on the missing position to obtain the target image includes the following steps:
and acquiring an image corresponding to the missing position from the second image based on a first mapping relation between the first image and the second image to obtain a third image.
Wherein, the first mapping relation is preset and stored.
And splicing the third image to the missing position of the first image to obtain a target image.
Specifically, in another embodiment of the present invention, the obtaining an image corresponding to the missing position from the second image based on the mapping relationship between the first image and the second image to obtain a third image includes the following steps:
a second sub-image corresponding to the first sub-image is acquired from the second image.
Wherein the second sub-image is a sub-image that synthesizes the second image.
And acquiring an image corresponding to the missing position from the second sub-image based on a second mapping relation between the first sub-image and the second sub-image to obtain a third image.
Wherein the second mapping relation is preset and stored.
Specifically, in another embodiment of the present invention, stitching the third image to the missing position of the first image to obtain the target image includes the following steps:
and splicing the third image to the missing position of the first sub-image to obtain a target sub-image.
And replacing the first sub-image with the target sub-image to obtain the target image.
Specifically, in another embodiment of the present invention, determining a position of a missing image of a first sub-image in a preset area to obtain a missing position includes the following steps:
and determining the offset distance from the preset reference line when the first sub-image is acquired.
And processing the first sub-image based on the offset distance to obtain a third sub-image.
And determining the position of the missing image of the third sub-image in the preset area to obtain the missing position.
Specifically, in other embodiments of the present invention, processing the first sub-image based on the offset distance to obtain a third sub-image includes the following steps:
and adjusting the position of the first sub-image in the preset area according to the offset distance to obtain a third sub-image.
Specifically, in other embodiments of the present invention, processing the first sub-image based on the offset distance to obtain a third sub-image further includes the following steps:
and according to the acquisition sequence of the first sub-images, acquiring a previous sub-image adjacent to the first sub-image from the first image to obtain a reference sub-image.
And adjusting the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image.
And adjusting the position of the fourth sub-image in the preset area based on the part, having the same image, of the reference sub-image as the fourth sub-image to obtain a third sub-image.
According to the computer-readable storage medium provided by the embodiment of the invention, an image acquisition instruction is received and responded to; a first image is then acquired through a first image collector and a second image through a second image collector; if the first image has a missing image, the position of the missing image in a preset area is determined to obtain a missing position; finally, the first image and the second image are merged based on the missing position to obtain a target image. The terminal can thus merge the part of the second image that corresponds to the missing position of the first image into the first image to obtain the target image. This solves the problem in existing image acquisition technology that the obtained panoramic image is partially missing because the user cannot guarantee that the terminal always moves horizontally along the shooting reference line while collecting a panoramic image; the missing part of the panoramic image is compensated and the effect of the panoramic image is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (7)
1. An image acquisition method, characterized in that the method comprises:
receiving an image acquisition instruction;
responding to the image acquisition instruction, acquiring a first image through a first image acquisition device, and acquiring a second image through a second image acquisition device;
if the first image has a missing image, determining the position of the missing image in a preset area to obtain a missing position;
merging the first image and the second image based on the missing position to obtain a target image;
wherein, if the first image has a missing image, determining the position of the missing image in the preset area to obtain the missing position includes:
if the first image has a missing image, determining a first sub-image corresponding to the missing image; wherein the first sub-image is one of the sub-images from which the first image is synthesized;
determining the offset distance from a preset reference line when the first sub-image is acquired;
processing the first sub-image based on the offset distance to obtain a third sub-image;
determining the position of the missing image of the third sub-image in the preset area to obtain the missing position;
processing the first sub-image based on the offset distance to obtain a third sub-image, including:
acquiring a previous sub-image adjacent to the first sub-image from the first image according to the acquisition sequence of the first sub-image to obtain a reference sub-image;
adjusting the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image;
and adjusting the position of the fourth sub-image in the preset area based on the portion of the reference sub-image that has the same image content as the fourth sub-image, to obtain the third sub-image.
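The adjustment steps of claim 1 — shifting a sub-image back by its offset from the preset reference line, then refining its position using the region it shares with the preceding (reference) sub-image — can be sketched as follows. This is an illustrative sketch only: the strip widths, the search window, and the sum-of-squared-differences matching criterion are assumptions, not specified by the patent.

```python
import numpy as np

def align_sub_image(ref, sub, offset):
    """Shift `sub` back by `offset` rows (its deviation from the preset
    reference line), then fine-tune its column position so that the strip
    it shares with the reference sub-image `ref` matches best."""
    # Step 1: undo the offset from the preset reference line (row shift).
    shifted = np.roll(sub, -offset, axis=0)
    # Step 2: search a small window of column shifts; score each candidate
    # by sum-of-squared-differences between the overlapping strips
    # (rightmost strip of `ref` vs. leftmost strip of the candidate).
    strip = ref[:, -8:].astype(float)
    best_shift, best_err = 0, np.inf
    for s in range(-3, 4):
        cand = np.roll(shifted, s, axis=1)[:, :8].astype(float)
        err = np.sum((cand - strip) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return np.roll(shifted, best_shift, axis=1)
```

Here the "part with the same image" of claim 1 is taken to be a fixed-width overlap strip between adjacent sub-images; a real implementation would derive the overlap from the acquisition geometry.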
2. The method of claim 1, wherein merging the first image and the second image based on the missing location to obtain a target image comprises:
acquiring an image corresponding to the missing position from the second image based on a first mapping relation between the first image and the second image to obtain a third image; wherein the first mapping relation is preset and stored;
and splicing the third image to the missing position of the first image to obtain the target image.
3. The method according to claim 2, wherein the acquiring an image corresponding to the missing position from the second image based on the first mapping relation between the first image and the second image to obtain a third image comprises:
acquiring a second sub-image corresponding to the first sub-image from the second image; wherein the second sub-image is one of the sub-images from which the second image is synthesized;
acquiring an image corresponding to the missing position from the second sub-image based on a second mapping relation between the first sub-image and the second sub-image to obtain a third image; wherein the second mapping relation is preset and stored.
4. The method of claim 3, wherein the stitching the third image to the missing position of the first image to obtain the target image comprises:
splicing the third image to the missing position of the first sub-image to obtain a target sub-image;
and replacing the first sub-image with the target sub-image to obtain the target image.
5. The method of claim 1, wherein processing the first sub-image based on the offset distance to obtain a third sub-image further comprises:
and adjusting the position of the first sub-image in the preset area according to the offset distance to obtain the third sub-image.
6. An image acquisition apparatus, characterized in that the apparatus comprises: a processor, a memory, a communication bus, and an image collector;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute an image acquisition program stored in the memory to implement the steps of:
receiving an image acquisition instruction;
responding to the image acquisition instruction, acquiring a first image through a first image acquisition device, and acquiring a second image through a second image acquisition device;
if the first image has a missing image, determining the position of the missing image in a preset area to obtain a missing position;
merging the first image and the second image based on the missing position to obtain a target image;
wherein, if the first image has a missing image, determining the position of the missing image in the preset area to obtain the missing position includes:
if the first image has a missing image, determining a first sub-image corresponding to the missing image; wherein the first sub-image is one of the sub-images from which the first image is synthesized;
determining the offset distance from a preset reference line when the first sub-image is acquired;
processing the first sub-image based on the offset distance to obtain a third sub-image;
determining the position of the missing image of the third sub-image in the preset area to obtain the missing position;
processing the first sub-image based on the offset distance to obtain a third sub-image, including:
acquiring a previous sub-image adjacent to the first sub-image from the first image according to the acquisition sequence of the first sub-image to obtain a reference sub-image;
adjusting the distribution position of the first sub-image in the preset area based on the offset distance to obtain a fourth sub-image;
and adjusting the position of the fourth sub-image in the preset area based on the portion of the reference sub-image that has the same image content as the fourth sub-image, to obtain the third sub-image.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an image acquisition program which, when executed by a processor, implements the steps of the image acquisition method according to any one of claims 1 to 5.
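As an illustration of the mapping-based filling described in claims 2–4 — extracting, from the second image, the region corresponding to the missing position and splicing it into the first image — a minimal sketch follows. Representing the stored mapping relation as a plain translation `(dy, dx)`, and the function and argument names, are assumptions for illustration; the patent itself only requires that a preset mapping relation between the two images be stored and used.

```python
import numpy as np

def fill_missing(first, second, mapping, missing_mask):
    """Fill the masked (missing) pixels of `first` with pixels taken from
    `second` at the mapped coordinates.  `mapping` carries a (row, col)
    in the first image to the matching (row, col) in the second image;
    here a simple translation (dy, dx) stands in for the stored mapping
    relation."""
    dy, dx = mapping
    target = first.copy()
    rows, cols = np.nonzero(missing_mask)       # missing position(s)
    # Third image = region of `second` at the mapped coordinates;
    # splice it into the missing position of `first`.
    target[rows, cols] = second[rows + dy, cols + dx]
    return target
```

In practice the two images come from different acquisition devices, so the mapping would be a calibrated geometric transformation rather than a pure translation; the splice-and-replace flow, however, matches the structure of claims 2 and 4.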
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710641284.6A CN107395971B (en) | 2017-07-31 | 2017-07-31 | Image acquisition method, image acquisition equipment and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107395971A CN107395971A (en) | 2017-11-24 |
CN107395971B true CN107395971B (en) | 2020-06-12 |
Family
ID=60344052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710641284.6A Active CN107395971B (en) | 2017-07-31 | 2017-07-31 | Image acquisition method, image acquisition equipment and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107395971B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110012209B (en) * | 2018-01-05 | 2020-08-14 | Oppo广东移动通信有限公司 | Panoramic image generation method and device, storage medium and electronic equipment |
CN108898171B (en) * | 2018-06-20 | 2022-07-22 | 深圳市易成自动驾驶技术有限公司 | Image recognition processing method, system and computer readable storage medium |
CN109474786B (en) * | 2018-12-24 | 2021-07-23 | 维沃移动通信有限公司 | Preview image generation method and terminal |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1492335B1 (en) * | 2003-06-27 | 2006-12-13 | Pioneer Corporation | Video signal processing apparatus |
JP4006296B2 (en) * | 2002-08-21 | 2007-11-14 | 倉敷紡績株式会社 | Displacement measuring method and displacement measuring apparatus by photogrammetry |
CN105208259A (en) * | 2014-06-17 | 2015-12-30 | 中兴通讯股份有限公司 | Method for optimizing camera autofocus and camera |
CN106096043A (en) * | 2016-06-24 | 2016-11-09 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN106131450A (en) * | 2016-07-29 | 2016-11-16 | 努比亚技术有限公司 | A kind of method of image procossing, device and terminal |
CN106231204A (en) * | 2016-08-30 | 2016-12-14 | 宇龙计算机通信科技(深圳)有限公司 | Stabilization photographic method based on dual camera and device, terminal |
CN106961549A (en) * | 2017-03-03 | 2017-07-18 | 宇龙计算机通信科技(深圳)有限公司 | A kind of image pickup method and device based on dual camera |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100480740B1 (en) * | 2004-07-23 | 2005-04-07 | 엘지전자 주식회사 | Method for Displaying Twin Picture of Display |
JP2007096392A (en) * | 2005-09-27 | 2007-04-12 | Alpine Electronics Inc | On-vehicle video reproducing apparatus |
2017-07-31: Application CN201710641284.6A filed in China; granted as patent CN107395971B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107948360B (en) | Shooting method of flexible screen terminal, terminal and computer readable storage medium | |
CN110072061B (en) | Interactive shooting method, mobile terminal and storage medium | |
CN107959795B (en) | Information acquisition method, information acquisition equipment and computer readable storage medium | |
CN107948530B (en) | Image processing method, terminal and computer readable storage medium | |
CN107820014A (en) | A kind of image pickup method, mobile terminal and computer-readable storage medium | |
CN107707821B (en) | Distortion parameter modeling method and device, correction method, terminal and storage medium | |
CN110086993B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN107105166B (en) | Image photographing method, terminal, and computer-readable storage medium | |
CN107133939A (en) | A kind of picture synthesis method, equipment and computer-readable recording medium | |
CN111327840A (en) | Multi-frame special-effect video acquisition method, terminal and computer readable storage medium | |
CN111885307B (en) | Depth-of-field shooting method and device and computer readable storage medium | |
CN112188082A (en) | High dynamic range image shooting method, shooting device, terminal and storage medium | |
CN109194874A (en) | Photographic method, device, terminal and computer readable storage medium | |
CN109120858B (en) | Image shooting method, device, equipment and storage medium | |
CN109710159B (en) | Flexible screen response method and device and computer readable storage medium | |
CN107979727A (en) | A kind of document image processing method, mobile terminal and computer-readable storage medium | |
CN111866388B (en) | Multiple exposure shooting method, equipment and computer readable storage medium | |
CN109672822A (en) | A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium | |
CN112511741A (en) | Image processing method, mobile terminal and computer storage medium | |
CN111654628A (en) | Video shooting method and device and computer readable storage medium | |
CN107395971B (en) | Image acquisition method, image acquisition equipment and computer-readable storage medium | |
CN109739414B (en) | Picture processing method, mobile terminal and computer readable storage medium | |
CN108600639B (en) | Portrait image shooting method, terminal and computer readable storage medium | |
CN112135060B (en) | Focusing processing method, mobile terminal and computer storage medium | |
CN112135045A (en) | Video processing method, mobile terminal and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2020-04-29
Address after: Room 614, No. 65, Chifeng Road, Yangpu District, Shanghai
Applicant after: SHANGHAI NEW EYES MEDICAL Inc.
Address before: 518000 Guangdong Province, Shenzhen high tech Zone of Nanshan District City, No. 9018 North Central Avenue, innovation building A, 6-8 layer, 10-11 layer, B layer, C District 6-10 District 6 floor
Applicant before: NUBIA TECHNOLOGY Co.,Ltd.
GR01 | Patent grant | ||