CN113012160A - Image processing method, image processing device, terminal equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113012160A
Authority
CN
China
Prior art keywords
image
display image
transformed
main display
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110201308.2A
Other languages
Chinese (zh)
Inventor
张明
林枝叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110201308.2A
Publication of CN113012160A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The embodiment of the application discloses an image processing method, an image processing device, terminal equipment and a computer readable storage medium. The method is applied to the terminal equipment, and comprises the following steps: segmenting the first image to obtain a main display image and an auxiliary display image; acquiring pose information of the terminal equipment; respectively transforming the main display image and the auxiliary display image according to the pose information; and splicing the transformed main display image and the transformed auxiliary display image to obtain a target image. The image processing method, the image processing device, the terminal equipment and the computer readable storage medium can enrich the image processing modes and improve the interactivity between the user and the image.

Description

Image processing method, image processing device, terminal equipment and computer readable storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
With the continued development of technology, more and more terminal devices are equipped with powerful camera systems, which may include one or more different cameras, so as to meet users' needs for shooting in different scenes. At present, the ways in which an image shot by a terminal device can be processed are limited, and the interactivity between the user and the image is poor.
Disclosure of Invention
The embodiment of the application discloses an image processing method, an image processing device, terminal equipment and a computer readable storage medium, which can enrich the image processing mode and improve the interactivity between a user and an image.
The embodiment of the application discloses an image processing method, which is applied to terminal equipment and comprises the following steps:
segmenting the first image to obtain a main display image and an auxiliary display image;
acquiring pose information of the terminal equipment;
respectively transforming the main display image and the auxiliary display image according to the pose information;
and splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
The embodiment of the application discloses an image processing device, is applied to terminal equipment, the device includes:
the image segmentation module is used for segmenting the first image to obtain a main display image and an auxiliary display image;
the pose acquisition module is used for acquiring pose information of the terminal equipment;
the transformation module is used for respectively transforming the main display image and the auxiliary display image according to the pose information;
and the splicing module is used for splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
The embodiment of the application discloses a terminal device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize the method.
An embodiment of the application discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method as described above.
According to the image processing method, the image processing device, the terminal device and the computer readable storage medium, the first image is segmented to obtain the main display image and the auxiliary display image, the pose information of the terminal device is acquired, the main display image and the auxiliary display image are respectively transformed according to the pose information, and the transformed main display image and the transformed auxiliary display image are spliced to obtain the target image. Because the first image is segmented and then spliced into the target image, a user can transform the main display image and the auxiliary display image by changing the pose information of the terminal device, so that the spliced target image changes; this enriches the image processing modes and improves the interactivity between the user and the image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are merely some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a block diagram of image processing circuitry in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a diagram illustrating a main display image and a sub display image in one embodiment;
FIG. 4 is a flowchart of an image processing method in another embodiment;
FIG. 5A is a schematic view of an embodiment with an expanded sub-region;
FIG. 5B is a diagram illustrating a primary image area and a secondary image area in one embodiment;
FIG. 6 is a diagram illustrating intersection pixels of a secondary display image and a primary display image, and corner pixels of the secondary display image, in accordance with one embodiment;
FIG. 7 is a flow diagram of displaying a target image in one embodiment;
FIG. 8A is a diagram illustrating an interface for displaying a main display image according to an embodiment;
FIG. 8B is a diagram illustrating an interface for displaying a target image, according to one embodiment;
FIG. 9 is a block diagram of an image processing apparatus in one embodiment;
fig. 10 is a block diagram of a terminal device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. The first image and the second image are both images, but they are not the same image.
The embodiment of the application provides a terminal device. The terminal device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of illustration, FIG. 1 illustrates only aspects of image processing techniques related to embodiments of the present application.
As shown in fig. 1, the image processing circuit includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. Image sensor 114 may include an array of color filters (e.g., Bayer filters), and image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, hall sensor, accelerometer, etc.) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
It should be noted that, although only one imaging device 110 is shown in fig. 1, in the embodiment of the present application at least two imaging devices 110 may be included; each imaging device 110 may correspond to one image sensor 114, or a plurality of imaging devices 110 may correspond to one image sensor 114, which is not limited herein. For the operation of each imaging device 110, reference may be made to the description above.
In addition, the image sensor 114 may also transmit raw image data to the attitude sensor 120, the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of interface of the attitude sensor 120, or the attitude sensor 120 may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image Memory 130 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives the processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
The image processing method provided by the embodiment of the present application is exemplarily described with reference to the image processing circuit of fig. 1. The ISP processor 140 may obtain the first image from the imaging device 110 or the image memory 130, and segment the obtained first image to obtain a main display image and a secondary display image. The attitude sensor 120 can transmit the acquired data to the ISP processor 140, and the ISP processor 140 can acquire the attitude information of the terminal device according to the data acquired by the attitude sensor 120 and respectively transform the primary display image and the secondary display image according to the attitude information. The ISP processor 140 may splice the transformed main display image and the transformed sub display image to obtain a target image.
Alternatively, after obtaining the target image, the ISP processor 140 may send the target image to the display 160, and the display 160 displays the target image, so that the user can visually observe the target image through the display 160.
As shown in fig. 2, in one embodiment, an image processing method is provided, which can be applied to the above terminal device, which may include, but is not limited to, a mobile phone, a smart wearable device, a tablet computer, and the like. The image processing method may include the steps of:
step 210, segmenting the first image to obtain a main display image and an auxiliary display image.
In some embodiments, the first image may be an image captured by a camera having a large Field of View (FOV). The field angle characterizes the angular range of the scene that the camera can capture; the larger the field angle of the camera, the larger the corresponding field of view. Alternatively, the first image may be an image captured by a camera whose field angle is greater than an angle threshold, and the angle threshold may include, but is not limited to, 55 degrees, 60 degrees, and the like.
As a specific embodiment, the first image may include at least one of a wide-angle image captured by a wide-angle camera and an ultra-wide-angle image captured by an ultra-wide-angle camera. The wide-angle camera may typically have a focal length of 24-38 mm and a field angle of 60-84 degrees, while the ultra-wide-angle camera may typically have a focal length of 13-20 mm and a field angle of 94-118 degrees. It should be noted that the wide-angle camera and the ultra-wide-angle camera may also be cameras of other models and parameters, and the first image may also be acquired by another camera with a larger field angle, which is not limited herein.
The terminal device may divide the first image into a main display image and a sub display image, where the main display image may be an image of a fixed area in the first image, for example, the main display image may be an image of a central area of the first image, and the sub display image may be an image of a peripheral area of the main display image in the first image. Optionally, the terminal device may obtain one main display image and a plurality of sub display images by dividing the first image. The main display image may be a square or rectangular image, and the sub display image may be a rectangle, trapezoid, or the like. The plurality of secondary display images can respectively correspond to one image side of the main display image, and if the first image has an image of a surrounding area on any image side of the main display image, the secondary display image corresponding to the image side can be divided.
FIG. 3 is a diagram illustrating a main display image and a sub display image according to an embodiment. As shown in fig. 3, the terminal device may segment the first image to obtain a main display image 310, a sub display image 302, a sub display image 304, a sub display image 306, and a sub display image 308. The secondary display image 302, the secondary display image 304, the secondary display image 306, and the secondary display image 308 may respectively correspond to 4 image sides of the primary display image 310, wherein the secondary display image 302 may include an image of a surrounding area of the first image on the upper side of the primary display image 310, the secondary display image 304 may include an image of a surrounding area of the first image on the left side of the primary display image 310, the secondary display image 306 may include an image of a surrounding area of the first image on the lower side of the primary display image 310, the secondary display image 308 may include an image of a surrounding area of the first image on the right side of the primary display image 310, and so on. It should be noted that fig. 3 shows only one division method of the main display image and the sub display image, and may be divided in another method (for example, division of another shape, another image region, or the like), which is not limited herein.
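As a purely illustrative, non-limiting sketch (not part of the original disclosure), the following Python snippet shows one possible way to perform such a division, assuming the main display image is a centered rectangle occupying a fixed fraction of the first image; the fraction, the rectangular sub-region layout and the NumPy-based representation are assumptions for illustration only.

```python
import numpy as np

def split_first_image(first_image: np.ndarray, main_ratio: float = 0.6):
    """Split a wide-FOV first image into a central main display image and four
    surrounding sub display images (top, bottom, left, right).

    `main_ratio` (an assumed value) controls the fraction of the frame that is
    kept as the main display image.
    """
    h, w = first_image.shape[:2]
    mh, mw = int(h * main_ratio), int(w * main_ratio)
    top = (h - mh) // 2
    left = (w - mw) // 2

    main = first_image[top:top + mh, left:left + mw]
    subs = {
        "top":    first_image[:top, left:left + mw],       # area above the main image
        "bottom": first_image[top + mh:, left:left + mw],  # area below the main image
        "left":   first_image[:, :left],                   # area to the left (full height)
        "right":  first_image[:, left + mw:],              # area to the right (full height)
    }
    return main, subs
```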
And step 220, acquiring pose information of the terminal equipment.
The pose information may include position and/or attitude information of the terminal device, and one or more pose sensors, such as an IMU (Inertial Measurement Unit) sensor, an acceleration sensor, and the like, may be provided in the terminal device. The pose information may be position and/or posture data of the terminal device at the current time, or change data of the position and/or posture of the terminal device at the current time, and the like. Data can be acquired through the pose sensor, and pose information of the terminal equipment can be acquired according to the data acquired by the pose sensor.
As an optional implementation manner, the pose information acquired by the terminal device may be position and/or pose information of the terminal device relative to an initial time, and the initial time may be set according to an actual requirement, for example, the initial time may be a time at which the terminal device acquires a first image through a camera, or a time at which the terminal device initializes a displayed interface, and optionally, after the displayed interface is initialized, the terminal device may display a divided main display image, adjust the displayed main display image by using the pose information of the terminal device, and finally display the obtained target image.
The terminal equipment can reset the pose sensor at the initial moment, and the pose sensor can accumulate position change and/or pose change and the like of the terminal equipment from the initial moment to obtain pose information of the terminal equipment in real time. For example, taking the pose sensor as an IMU sensor as an example, the IMU may measure the angular velocity and the acceleration of the terminal device in the three-dimensional space from an initial time, further solve the three-dimensional pose and the displacement change of the terminal device, and accumulate the three-dimensional pose and the displacement change of the terminal device, thereby obtaining the pose information of the terminal device in real time.
And 230, respectively transforming the main display image and the auxiliary display image according to the pose information.
In the embodiment of the present application, the transformation of the primary display image and the secondary display image may be an image geometric transformation, which may include one or more of a translation transformation, a scaling transformation, a rotation transformation, an offset transformation, and the like, but is not limited thereto. The terminal equipment can determine a transformation matrix corresponding to the main display image according to the pose information, and transforms the main display image by using the transformation matrix to obtain a transformed main display image. Optionally, the transformation of the main display image may be matched with pose information of the terminal device, the main display image may be subjected to translation transformation and/or scaling transformation according to the position information of the terminal device, and the main display image may be subjected to rotation transformation and/or offset transformation according to the pose information of the terminal device, so that the transformation of the main display image conforms to the current pose information of the terminal device.
After the main display image is transformed, the terminal device can determine a transformation matrix corresponding to the auxiliary display image based on the transformed main display image, and then transform the auxiliary display image according to that transformation matrix. Because the pixel points at the intersection of the auxiliary display image and the main display image are transformed along with the transformation of the main display image, the transformation matrix corresponding to the auxiliary display image can be determined by using these intersection pixel points, so that the intersection pixel points remain coincident after transformation and the transformed auxiliary display image and the transformed main display image can be spliced more seamlessly.
And step 240, splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
In some embodiments, the terminal device may register the transformed main display image and the transformed sub display image to obtain matched pixel points in the transformed main display image and the transformed sub display image, and fuse the matched pixel points in the transformed main display image and the transformed sub display image, and unmatched pixel points may be directly spliced and combined, so that the transformed main display image and the transformed sub display image are spliced into a seamless target image.
Optionally, the terminal device may display the first image, and display the target image after the target image is obtained by stitching, so that a user may visually observe a change of the first image. In some embodiments, the first image may be an image acquired by a camera with a large FOV, and the first image is divided into a primary display image and a secondary display image, the primary display image and the secondary display image are transformed and adjusted by using the pose information of the terminal device, and then a new target image is formed by splicing, so that the display content of the image with the large FOV can be adjusted by changing the pose of the terminal device, the interactivity is improved, and the immersion of a user can be improved.
In the embodiment of the application, the first image is segmented to obtain the main display image and the auxiliary display image, the pose information of the terminal device is obtained, the main display image and the auxiliary display image are respectively transformed according to the pose information, and the transformed main display image and the transformed auxiliary display image are spliced to obtain the target image. Because the first image is segmented and then spliced into the target image, a user can transform the main display image and the auxiliary display image by changing the pose information of the terminal device, so that the spliced target image changes; this enriches the processing modes of the image and improves the interactivity between the user and the image.
As shown in fig. 4, in an embodiment, another image processing method is provided, which is applicable to the terminal device described above, and the method may include the following steps:
step 402, aligning the first image with the second image, and determining an image area in the first image, which is matched with the second image.
In one embodiment, at least a first camera and a second camera may be disposed on the terminal device, wherein the field angle of the first camera may be larger than that of the second camera, for example, the first camera may be a wide-angle camera or an ultra-wide-angle camera, and the second camera may be a main camera (i.e., a standard camera). The terminal device can control the first camera and the second camera to simultaneously acquire images, the first image can be an image acquired by the terminal device through the first camera at a first moment, the second image can be an image acquired by the terminal device through the second camera at the first moment, and the first image and the second image are images respectively acquired by the first camera and the second camera at the same moment.
In some embodiments, before the terminal device acquires the first image and the second image through the first camera and the second camera, the terminal device may further calibrate the first camera and the second camera through a camera calibration algorithm, correct the first camera and the second camera, and calibrate internal parameters, external parameters, and the like of the first camera and the second camera. The internal parameters refer to parameters inside the camera and may include distortion parameters, lens optical axis offset parameters, focal length, and the like, and the external parameters may be used to describe parameters of the camera in a world coordinate system, such as a position and a rotation direction of the camera, and a positional relationship between the camera and other cameras.
The terminal equipment can perform distortion correction and three-dimensional correction on the first camera according to the calibrated internal parameters, external parameters and the like of the first camera, and perform distortion correction on the second camera according to the calibrated internal parameters, external parameters and the like of the second camera, so that normal first images and second images can be obtained, and the accuracy of alignment of the subsequent first images and the second images is improved.
The terminal device may align the first image with the second image, and the manner of aligning the first image with the second image may include, but is not limited to, the following manners:
the method comprises the steps of respectively extracting characteristic points in a first image and a second image, matching the characteristic points of the first image with the characteristic points of the second image, and determining an image area matched with the second image in the first image according to the matched characteristic point pairs in the first image and the second image. Alternatively, SIFT (Scale-invariant feature transform) feature descriptors, ORB (Oriented Fast and Rotated binary features) feature descriptors, and the like in the first image and the second image may be extracted, but not limited thereto. Furthermore, matched feature point pairs can be screened by using algorithms such as RANSAC (Random sample consensus), wrong matched feature points can be filtered, and the accuracy of image alignment can be improved.
In the second manner, the second image can be reduced according to a certain reduction ratio, and the reduced second image is used as an image search template. The image search template is slid over the first image, the sub-image block covered by the image search template in the first image is compared with the image search template, and the similarity between the image search template and the covered sub-image block is calculated. When the similarity is greater than a preset similarity threshold, the sub-image block covered by the image search template in the first image can be taken as the image area in the first image that matches the second image.
In one embodiment, the above-mentioned reduction ratio can be determined according to a field angle ratio between the first camera and the second camera. Further, the reduction ratio can be in positive correlation with the field angle ratio between the first camera and the second camera. The larger the ratio of the field angles between the first camera and the second camera is, the larger the difference between the field angle of the first camera and the field angle of the second camera is, the larger the reduction ratio can be set, and the size change of the reduced second image relative to the original second image is larger. The smaller the ratio of the field angles between the first camera and the second camera is, the smaller the difference between the field angle of the first camera and the field angle of the second camera is, the smaller the reduction ratio can be set, and the reduced second image is closer to the original second image.
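A minimal sketch of this second, template-matching manner is given below; the mapping from the field-angle ratio to the scale factor and the use of normalized cross-correlation as the similarity measure are assumptions made only for illustration.

```python
import cv2

def match_region_by_template(first_image, second_image, fov_ratio, sim_threshold=0.8):
    """Shrink the second image by a reduction ratio derived from the
    field-angle ratio and slide it over the first image to find the matched
    image area (illustrative sketch; assumes BGR inputs)."""
    gray1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)

    # Assumed mapping: a larger field-angle ratio between the first and second
    # camera means the template must be shrunk more before searching.
    scale = 1.0 / max(fov_ratio, 1.0)
    template = cv2.resize(gray2, None, fx=scale, fy=scale)

    # Normalized cross-correlation as the similarity between the template and
    # each covered sub-image block.
    result = cv2.matchTemplate(gray1, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < sim_threshold:
        return None

    th, tw = template.shape[:2]
    x, y = max_loc
    return (x, y, tw, th)  # matched image area (x, y, width, height) in the first image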
Alternatively, the second image may also be a preset reference image, or an image captured by another camera, and the like, and is not limited to the image captured by the main camera of the terminal device.
Step 404, determining a primary display image area and a secondary display image area in the first image according to the matched image area.
As an alternative implementation, the terminal device may directly use an image area in the first image that matches the second image as a primary display image area, and use other image areas in the first image except the primary display image area as secondary display image areas.
As another optional implementation, since the image area in the first image that matches the second image may be an irregularly-shaped image area, the terminal device may search for an inscribed quadrangle of the matched image area, and use a vertex of the inscribed quadrangle as a vertex of the main display image area to obtain the main display image area. The inscribed quadrilateral may include, but is not limited to, a square, a rectangle, and the like. The image area corresponding to the inscribed quadrangle can be used as the main display image area. By searching the inscribed quadrangle of the matched image area and determining the main display image area based on the inscribed quadrangle, the invalid image area can be prevented from appearing in the subsequent image segmentation, the image processing efficiency is improved, and the image splicing effect is further ensured.
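One possible (assumed) way to search for such an inscribed quadrangle is to binarize the matched area into a mask and find the largest axis-aligned rectangle contained in it, for example with the classic maximal-rectangle histogram method; the sketch below is illustrative only and is not the only way to realize the step described above.

```python
import numpy as np

def largest_inscribed_rect(mask: np.ndarray):
    """Find the largest axis-aligned rectangle fully inside a binary mask of
    the (possibly irregular) matched image area; its corners can then serve
    as the vertices of the main display image region (illustrative sketch)."""
    h, w = mask.shape
    heights = np.zeros(w, dtype=int)
    best = (0, 0, 0, 0, 0)  # (area, x, y, rect_w, rect_h)
    for row in range(h):
        # Histogram of consecutive in-mask pixels ending at this row.
        heights = np.where(mask[row] > 0, heights + 1, 0)
        stack = []  # column indices with increasing heights
        for col in range(w + 1):
            cur = heights[col] if col < w else 0  # sentinel flushes the stack
            while stack and heights[stack[-1]] >= cur:
                top = stack.pop()
                rect_h = int(heights[top])
                left = stack[-1] + 1 if stack else 0
                rect_w = col - left
                area = rect_h * rect_w
                if area > best[0]:
                    best = (area, left, row - rect_h + 1, rect_w, rect_h)
            stack.append(col)
    return best[1:]  # (x, y, width, height) of the main display image region
```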
After determining the main display image area in the first image, the terminal device may divide the other image areas except the main display image area in the first image into a plurality of sub-areas, and determine the sub-display image area corresponding to each sub-area according to the plurality of sub-areas. The other image regions may be divided into a plurality of sub-regions corresponding to the respective image sides according to the respective image sides of the main display image region, so that a plurality of sub-display image regions corresponding to each image side of the main display image region may be determined.
In some embodiments, determining a sub-display image region corresponding to each sub-region from a plurality of sub-regions may include: and respectively expanding the plurality of sub-regions according to the expansion ratio to obtain a sub-display image region corresponding to each sub-region. The expansion ratio may be set according to actual requirements, for example, 10%, 20%, 15%, 18%, etc., and is not limited herein. The expansion of the sub-region according to the expansion ratio can mean that the edge where the sub-region and the main display image region are intersected is expanded towards the main display image region, so that the expanded sub-display image region and the main display image region have overlapped image regions, the splicing effect of subsequent image splicing can be improved, and a seamless target image can be obtained.
FIG. 5A is a schematic diagram illustrating expansion of sub-regions in one embodiment. As shown in fig. 5A, after determining the main display image area 510 in the first image, the terminal device may divide other image areas outside the main display image area 510 to obtain a plurality of sub-areas, where only one sub-area 502 is shown in fig. 5A, the sub-area 502 may be expanded according to an expansion ratio, and a side where the sub-area 502 and the main display image area 510 overlap may be expanded toward the main display image area 510 to obtain a sub-display image area 520 corresponding to the sub-area 502. There is an overlap region between the sub-display image region 520 and the main display image region 510.
Fig. 5B is a schematic diagram of a main display image area and a sub display image area in one embodiment. As shown in fig. 5B, after determining the primary display image region 510 in the first image, the terminal device may expand each of the divided sub-regions based on the expansion ratio to obtain a secondary display image region 520, a secondary display image region 530, a secondary display image region 540, and a secondary display image region 550.
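A minimal sketch of the expansion step is given below, assuming the sub-regions and the main display image region are axis-aligned rectangles described as (x, y, w, h) tuples and assuming an expansion ratio of 15%; both assumptions are for illustration only.

```python
def expand_sub_region(sub_rect, main_rect, expansion_ratio=0.15):
    """Expand the edge where a sub-region adjoins the main display image region
    toward the main region, so that the resulting sub display image region
    overlaps the main display image region (illustrative sketch)."""
    sx, sy, sw, sh = sub_rect
    mx, my, mw, mh = main_rect

    if sy + sh == my:                  # sub-region sits above the main region
        sh += int(mh * expansion_ratio)
    elif sy == my + mh:                # below the main region
        grow = int(mh * expansion_ratio)
        sy -= grow
        sh += grow
    elif sx + sw == mx:                # left of the main region
        sw += int(mw * expansion_ratio)
    elif sx == mx + mw:                # right of the main region
        grow = int(mw * expansion_ratio)
        sx -= grow
        sw += grow
    return (sx, sy, sw, sh)
```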
And 406, segmenting the first image according to the main display image area and the auxiliary display image area to obtain a main display image and an auxiliary display image.
The terminal device can divide the first image, can take the main display image area of the first image as a main display image, and take each auxiliary display image area in the first image as an auxiliary display image, so that one main display image and a plurality of auxiliary display images can be obtained. Taking fig. 5B as an example, the terminal device may be divided according to the main display image area 510, the sub display image area 520, the sub display image area 530, the sub display image area 540, and the sub display image area 550 in fig. 5B, so that the main display image area 510 may be used as a main display image, and the sub display image area 520, the sub display image area 530, the sub display image area 540, and the sub display image area 550 may be respectively used as a sub display image.
In other embodiments, a fixed resolution image may be directly selected from the first image as the main display image, and the terminal device may select an image area with a fixed size and an image position from the first image as the main display image.
And step 408, acquiring pose information of the terminal equipment.
In some embodiments, the terminal device may measure an angular velocity and an acceleration at the current time through the pose sensor, where the angular velocity refers to the rate at which the attitude of the terminal device changes, and the acceleration may include a translational acceleration, that is, the acceleration of the terminal device when it translates. Based on a sensor model of the pose sensor, the translation amount of the terminal device is calculated from the measured acceleration, and a translation matrix is constructed from the translation amount; a quaternion is calculated from the measured angular velocity, and a rotation matrix is constructed from the quaternion, which can be used to describe the orientation (i.e., attitude) of the terminal device in three-dimensional space. The constructed translation matrix and rotation matrix can be used as the pose information of the terminal device.
As a specific implementation manner, the translation amount of the terminal device may be calculated according to formula (1) and formula (2):

$$p_{j\{x,y,z\}} = p_{i\{x,y,z\}} + v_{i\{x,y,z\}}\,\Delta t + \tfrac{1}{2}\,a_{\{x,y,z\}}\,\Delta t^{2} \qquad \text{formula (1)}$$

$$v_{j\{x,y,z\}} = v_{i\{x,y,z\}} + a_{\{x,y,z\}}\,\Delta t \qquad \text{formula (2)}$$

where $p_{i\{x,y,z\}}$ represents the displacement of the terminal device in the pose-sensor coordinate system at time $i$; $p_{j\{x,y,z\}}$ represents the displacement of the terminal device in the pose-sensor coordinate system at time $j$; $a_{\{x,y,z\}}$ represents the acceleration of the terminal device in the pose-sensor coordinate system; $\Delta t$ represents the time difference from time $i$ to time $j$; $v_{i\{x,y,z\}}$ represents the displacement velocity of the terminal device in the pose-sensor coordinate system at time $i$; and $v_{j\{x,y,z\}}$ represents the displacement velocity of the terminal device in the pose-sensor coordinate system at time $j$.

The translation amounts of the terminal device on the X axis, the Y axis and the Z axis in the pose-sensor coordinate system at time $j$ are calculated by formula (1): the translation amount on the X axis may be $p_{j,x}$, the translation amount on the Y axis may be $p_{j,y}$, and the translation amount on the Z axis may be $p_{j,z}$.
It should be noted that time $j$ may refer to the current time and time $i$ may refer to the time immediately before the current time; the previous time may be determined by the frame rate of the pose sensor, and the time difference $\Delta t$ from time $i$ to time $j$ may be the reciprocal of the frame rate.
After the translation amounts of the terminal device on the X axis, the Y axis and the Z axis in the pose-sensor coordinate system are obtained by calculation, a translation matrix $T(p)$ can be constructed from the translation amounts:

$$T(p)=\begin{bmatrix}1 & 0 & 0 & p_{x}\\ 0 & 1 & 0 & p_{y}\\ 0 & 0 & 1 & p_{z}\\ 0 & 0 & 0 & 1\end{bmatrix}$$

where $p_{x}$, $p_{y}$ and $p_{z}$ are the translation amounts of the terminal device on the X axis, the Y axis and the Z axis in the pose-sensor coordinate system obtained above, respectively.
As a specific embodiment, the quaternion can be calculated according to formula (3):

$$q_{j}=q_{i}\otimes\Delta q,\qquad \Delta q=\begin{bmatrix}\cos(\delta/2)\\ \dfrac{\omega}{\lVert\omega\rVert}\,\sin(\delta/2)\end{bmatrix},\qquad \delta=\lVert\omega\rVert\,\Delta t \qquad \text{formula (3)}$$

where $q_{i}$ is the quaternion calculated at time $i$, $q_{j}$ is the quaternion calculated at time $j$, $\omega$ is the angular velocity, and $\delta$ is the Euler angle (the rotation angle accumulated over $\Delta t$). In addition, the quaternion can be written as $q = w + x\,k_{1} + y\,k_{2} + z\,k_{3}$, where $k_{1}$, $k_{2}$ and $k_{3}$ are the imaginary units, $w$ is a constant parameter, and $x$, $y$ and $z$ correspond to the X-axis, Y-axis and Z-axis components of the terminal device in the pose-sensor coordinate system.

The rotation matrix $R(p)$ can be obtained from the calculated quaternion:

$$R(p)=\begin{bmatrix}1-2(y^{2}+z^{2}) & 2(xy-wz) & 2(xz+wy)\\ 2(xy+wz) & 1-2(x^{2}+z^{2}) & 2(yz-wx)\\ 2(xz-wy) & 2(yz+wx) & 1-2(x^{2}+y^{2})\end{bmatrix}$$
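The pose integration described by formulas (1) to (3) and the construction of $T(p)$ and $R(p)$ could be sketched as follows. This is an illustrative NumPy implementation only; the variable names, the (w, x, y, z) quaternion ordering and the per-sample update structure are assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_pose(p_i, v_i, q_i, accel, gyro, dt):
    """Propagate displacement, velocity and orientation by one pose-sensor
    sample, then build the translation matrix T(p) and rotation matrix R(p)
    used as pose information (illustrative sketch of formulas (1)-(3))."""
    # Formulas (1) and (2): constant-acceleration integration over dt.
    p_j = p_i + v_i * dt + 0.5 * accel * dt ** 2
    v_j = v_i + accel * dt

    # Formula (3): quaternion update from the angular velocity over dt.
    delta = gyro * dt                      # small rotation angle vector
    angle = np.linalg.norm(delta)
    if angle > 1e-9:
        axis = delta / angle
        dq = np.concatenate(([np.cos(angle / 2)], axis * np.sin(angle / 2)))
    else:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    q_j = quat_mul(q_i, dq)
    q_j /= np.linalg.norm(q_j)

    # Homogeneous translation matrix T(p) from the accumulated displacement.
    T = np.eye(4)
    T[:3, 3] = p_j

    # Rotation matrix R(p) from the quaternion q = (w, x, y, z).
    w, x, y, z = q_j
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    return p_j, v_j, q_j, T, R
```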
and step 410, transforming the main display image according to the pose information.
The terminal device can transform the main display image by using the calculated translation matrix and rotation matrix: the main display image may first be translated by using the translation matrix and then rotated by using the rotation matrix. In one embodiment, with R denoting the rotation matrix, T denoting the translation matrix, and P denoting the homogeneous coordinate matrix of the main display image, the matrix P is translated and rotated to obtain the transformed matrix P' = R·T·P, and P' is used to transform the main display image to obtain the transformed main display image.
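As an illustrative alternative formulation (not taken from the original description), the warp of the main display image induced by a small device motion can also be expressed as a planar homography H = K(R + t·nᵀ/d)K⁻¹, assuming known camera intrinsics K and a fronto-parallel scene plane at depth d; both assumptions are stated in the sketch below.

```python
import cv2
import numpy as np

def transform_main_image(main_image, R, t, K, d=1.0):
    """Warp the main display image according to the device pose change using a
    planar homography (illustrative sketch: the intrinsic matrix K, the plane
    normal n = (0, 0, 1) and the plane depth d are assumptions, not part of
    the original description)."""
    n = np.array([0.0, 0.0, 1.0])                  # assumed fronto-parallel plane normal
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    H /= H[2, 2]

    h, w = main_image.shape[:2]
    transformed = cv2.warpPerspective(main_image, H, (w, h))
    return transformed, H
```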
And step 412, determining a transformation matrix corresponding to the secondary display image based on the transformed primary display image, and transforming the secondary display image according to the transformation matrix.
Since the secondary display image contains the image content of the first image in the image area around the primary display image, there are intersecting pixel points between the secondary display image and the primary display image, and the intersecting pixel points can be understood as pixel points existing on both the image sides of the secondary display image and the primary display image. The intersection pixel points of the secondary display image and the primary display image can be transformed along with the transformation of the primary display image, the corner pixel points on the outer boundary of the secondary display image cannot be transformed, and the corner pixel points on the outer boundary of the secondary display image can be understood as corner pixel points on the side which does not intersect with the primary display image.
FIG. 6 is a diagram illustrating intersection pixels of a secondary display image and a primary display image and corner pixels of the secondary display image in one embodiment. As shown in fig. 6, p1 and p2 are the intersecting pixels of the secondary display image 520 and the primary display image 510, p3 and p4 are the corner pixels on the outer boundary of the secondary display image, p1 and p2 are transformed with the transformation of the primary display image 510, and p3 and p4 are kept unchanged.
Therefore, the terminal device can determine the intersection pixel points of the secondary display image and the primary display image together with their corresponding transformed pixel points in the transformed primary display image, and calculate the transformation matrix corresponding to the secondary display image according to the corner pixel points of the secondary display image, the intersection pixel points and the corresponding transformed pixel points.
Specifically, the terminal device may determine the intersection pixel points of the secondary display image and the primary display image according to the coordinates, in the first image, of the pixel points contained on each image side of the secondary display image and the coordinates, in the first image, of the pixel points contained on each image side of the primary display image, so as to obtain the pixel points that exist on both an image side of the secondary display image and an image side of the primary display image and have the same coordinates, that is, the intersection pixel points. The transformed pixel points corresponding to these intersection pixel points are then calculated based on the transformation matrix corresponding to the primary display image (that is, the transformation matrix P'); they are the pixel points corresponding to the intersection pixel points in the transformed primary display image. Finally, a homography transformation matrix corresponding to the secondary display image is calculated by using the transformed pixel points corresponding to the intersection pixel points and the untransformed corner pixel points of the secondary display image.
As illustrated in fig. 6, the terminal device may use the transformation matrix corresponding to the primary display image 510 to calculate the transformed pixel points p1' and p2' corresponding to p1 and p2 of the secondary display image 520, and may then calculate the homography transformation matrix corresponding to the secondary display image 520 by using the point pairs (p1, p1'), (p2, p2'), (p3, p3) and (p4, p4).
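A minimal sketch of this step, assuming the four point pairs are expressed in a common coordinate system and using OpenCV's four-point perspective-transform estimation:

```python
import cv2
import numpy as np

def sub_image_homography(p1, p1_t, p2, p2_t, p3, p4):
    """Compute the homography for a sub display image from the two intersection
    points (transformed with the main display image) and the two unchanged
    outer corner points, as described for FIG. 6 (illustrative sketch;
    points are (x, y) tuples)."""
    src = np.float32([p1, p2, p3, p4])
    dst = np.float32([p1_t, p2_t, p3, p4])   # p3 and p4 stay fixed
    return cv2.getPerspectiveTransform(src, dst)

# The sub display image can then be warped with the resulting matrix, e.g.:
# warped = cv2.warpPerspective(sub_image, H, (out_w, out_h))
```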
After the transformation matrix corresponding to the secondary display image is obtained through calculation, each pixel point in the secondary display image can be transformed according to the transformation matrix corresponding to the secondary display image, and the transformed secondary display image is obtained. When there are a plurality of sub-display images, the transformation matrix corresponding to each sub-display image may be calculated respectively in the manner described in the above embodiment, and each sub-display image may be transformed according to the transformation matrix corresponding to each sub-display image, so as to obtain a plurality of transformed sub-display images.
And step 414, splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
The terminal equipment can splice the transformed main display image and the transformed multiple auxiliary display images to obtain a target image. For each transformed secondary display image, the transformed primary display image and the transformed secondary display image may be aligned and an overlap region between the transformed primary display image and the transformed secondary display image may be calculated. It should be noted that, the manner of aligning the transformed main display image and the transformed sub display image may adopt the manner of aligning the first image and the second image in the above embodiment, and details of the manner of aligning the transformed main display image and the transformed sub display image are not repeated here.
In some embodiments, since the divided sub-regions are expanded according to the expansion ratio when the secondary display image regions are determined, there is an overlapping region between each divided secondary display image and the primary display image. After the secondary display image is transformed, the secondary display image and the primary display image correspond to different transformation matrices, so the transformed primary display image and the transformed secondary display image need to be aligned again to obtain the image area matched between them; this matched image area is the overlapping area.
In some embodiments, after the sub-region is expanded to obtain the sub-display image region corresponding to the sub-region, four corner pixel points of the overlapping region between the sub-display image region and the main display image region may be recorded, and the four corner pixel points of the overlapping region are transformed by using the transformation matrix of the main display image to obtain four transformation pixel points of the four corner pixel points in the transformed main display image, so as to determine the overlapping region between the transformed main display image and the transformed sub-display image according to the four transformation pixel points.
The terminal equipment can fuse the overlapped area between the transformed main display image and the transformed auxiliary display image and splice to obtain the target image. The method of fusing the overlapped regions between the transformed main display image and each transformed sub display image may include, but is not limited to, average pixel value fusion, weighted average pixel value fusion, non-linear method based fusion, color space fusion, and the like. The non-overlapping areas of the transformed main display image and the transformed auxiliary display images can be directly spliced to form a seamless target image.
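The fusion of an overlapping region could be sketched as follows, assuming color (H×W×C) patches and using either a simple average or a linear feathering weight as the weighted-average option mentioned above; the feathering direction is an assumption for illustration.

```python
import numpy as np

def blend_overlap(main_patch, sub_patch, mode="average"):
    """Fuse the overlapping region between the transformed main display image
    and a transformed sub display image (illustrative sketch; both patches
    are assumed to be equal-sized H x W x C arrays)."""
    a = main_patch.astype(np.float32)
    b = sub_patch.astype(np.float32)
    if mode == "average":
        fused = 0.5 * a + 0.5 * b
    else:
        # Weighted average: feather linearly from the main-image side toward
        # the sub-image side across the width of the overlap (assumed layout).
        w = a.shape[1]
        alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)
        fused = alpha * a + (1.0 - alpha) * b
    return np.clip(fused, 0, 255).astype(np.uint8)
```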
In the embodiment of the application, the pose information of the terminal device is used to transform the main display image, so that the main display image is transformed along with the change of the pose information and remains consistent with it, which further improves the interaction effect. The secondary display image is transformed based on the transformed main display image, which ensures the image effect after the secondary display image is transformed and improves the accuracy of image splicing. As a result, the image processing modes are enriched, the interactivity between the user and the image is improved, and the visual effect of the target image is improved.
As shown in fig. 7, in an embodiment, the image processing method may further include the steps of:
step 702, displaying the target image.
The terminal device can initialize the display interface, display the main display image after the main display image is obtained through segmentation, and optionally, can directly display the image collected by the second camera after the display interface is initialized. The terminal equipment utilizes the pose information to transform the main display image and the auxiliary display image, can display the transformed main display image on the display interface, and can display the target image after splicing the transformed main display image and the auxiliary display image to obtain the target image, so that a user can directly observe the change of the main display image, and the interactivity is improved.
FIG. 8A is a diagram illustrating an interface for displaying a main display image according to an embodiment. As shown in fig. 8A, the display interface 810 displays the segmented main display image, and after the terminal device transforms the main display image by using the pose information, the transformed main display image may be displayed on the display interface 820.
FIG. 8B is a diagram illustrating an interface for displaying a target image, according to an embodiment. As shown in fig. 8B, after the primary display image and the secondary display image are transformed, the transformed primary display image and the transformed secondary display image may be spliced to obtain the target image. Since the FOV of the first image is larger, the FOV of the spliced target image is also larger, and it can be visually seen from fig. 8B that the target image can display the image content of the larger FOV relative to the image (i.e., the main display image) captured by the main camera, so that the immersion feeling of the user can be improved.
And step 704, when the pose information of the terminal equipment is detected to be changed, obtaining the pose change information of the terminal equipment.
And 706, transforming the main display image and the auxiliary display image again according to the pose change information, splicing the transformed main display image and the transformed auxiliary display image to obtain an updated target image, and displaying the updated target image.
When detecting that the pose information changes, the terminal device can acquire the pose change information of the terminal device; the pose change information can be used to represent the changed position and attitude of the terminal device, and the target image is updated and displayed according to the pose change information. The main display image and the auxiliary display image can be transformed again according to the pose change information, and the transformation of the main display image can be kept consistent with the pose change information: for example, when the terminal device moves leftwards, the main display image can be translated leftwards, and when the terminal device rotates rightwards, the main display image can be rotated rightwards to change the viewing angle, so that the main display image follows the movement of the terminal device to switch viewing angles. Alternatively, a transformation rule of the main display image may be set; for example, when the terminal device moves toward the user, the main display image may be enlarged, and when the terminal device moves away from the user, the main display image may be reduced, and so on, which is not limited herein.
The terminal equipment can splice the re-transformed main display image and the transformed auxiliary display image to obtain an updated target image, and then the updated target image is displayed, so that the effect of updating and displaying the target image based on the pose information in real time is achieved, a user can visually see different target images in a display interface by changing the pose of the terminal equipment, and the interactive feeling between the user and the images is improved.
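A hypothetical event loop tying steps 702 to 706 together is sketched below. Every callable passed into the function (read_pose, transform_main, transform_subs, stitch, show, pose_changed, should_stop) is a placeholder name standing in for the modules described above, not an API from the original disclosure.

```python
import time

def run_interactive_display(main_img, sub_imgs, read_pose, transform_main,
                            transform_subs, stitch, show, pose_changed,
                            should_stop, poll_interval=0.02):
    """Illustrative event loop: display the stitched target image and
    regenerate it whenever the device pose changes (steps 702-706)."""
    last_pose = read_pose()
    target = stitch(transform_main(main_img, last_pose),
                    transform_subs(sub_imgs, main_img, last_pose))
    show(target)                                   # step 702: display the target image

    while not should_stop():
        pose = read_pose()                         # step 704: watch for pose changes
        if pose_changed(pose, last_pose):
            # Step 706: re-transform the main and auxiliary display images with
            # the new pose, splice them again and display the updated target image.
            target = stitch(transform_main(main_img, pose),
                            transform_subs(sub_imgs, main_img, pose))
            show(target)
            last_pose = pose
        time.sleep(poll_interval)
```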
In the embodiment of the application, the terminal device can display the target image through the display interface and update and display the target image according to the pose of the terminal device in real time, and a user can visually see different target images in the display interface by changing the pose of the terminal device, so that the interactive feeling between the user and the images is improved.
As shown in FIG. 9, in one embodiment, an image processing apparatus 900 is provided, which can be applied to the terminal device described above. The image processing apparatus 900 may include an image segmentation module 910, a pose acquisition module 920, a transformation module 930, and a splicing module 940.
The image segmentation module 910 is configured to segment the first image to obtain a main display image and an auxiliary display image.
In one embodiment, the first image includes at least one of a wide-angle image captured by a wide-angle camera and an ultra-wide-angle image captured by an ultra-wide-angle camera.
The pose acquisition module 920 is configured to acquire pose information of the terminal device.
The transformation module 930 is configured to respectively transform the main display image and the auxiliary display image according to the pose information.
The splicing module 940 is configured to splice the transformed main display image and the transformed auxiliary display image to obtain a target image.
In the embodiment of the application, the first image is segmented to obtain a main display image and an auxiliary display image, the pose information of the terminal device is acquired, the main display image and the auxiliary display image are respectively transformed according to the pose information, and the transformed main display image and the transformed auxiliary display image are spliced to obtain the target image. Because the target image is formed by segmenting and re-splicing the first image, the user can transform the main display image and the auxiliary display image, and thereby change the spliced target image, simply by changing the pose of the terminal device. This enriches the image processing modes and improves the interactivity between the user and the image.
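Before the individual units are described, the relationship between the four modules can be pictured with the following minimal Python skeleton; the class and method names are hypothetical and the bodies are placeholders, so this is an organizational sketch rather than the claimed implementation.

```python
class ImageProcessingApparatus:
    """Sketch of apparatus 900: segment, acquire pose, transform, splice."""

    def segment(self, first_image):
        # Image segmentation module 910: split the first image into a main
        # display image and one or more auxiliary display images.
        raise NotImplementedError

    def acquire_pose(self):
        # Pose acquisition module 920: read the terminal device's pose information.
        raise NotImplementedError

    def transform(self, main_img, aux_imgs, pose):
        # Transformation module 930: transform the main and auxiliary display
        # images according to the pose information.
        raise NotImplementedError

    def splice(self, main_img, aux_imgs):
        # Splicing module 940: stitch the transformed images into the target image.
        raise NotImplementedError
```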
In one embodiment, the image segmentation module 910 includes an alignment unit, an area determination unit, and a segmentation unit.
The alignment unit is configured to align a first image and a second image and determine an image area in the first image that matches the second image, where the first image is acquired by the terminal device through a first camera at a first moment, the second image is acquired by the terminal device through a second camera at the same first moment, and the field of view of the first camera is larger than that of the second camera.
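One way the alignment unit could locate the matched image area is by feature matching, as in the sketch below, which matches ORB features between the second image and the first image and projects the second image's corners into first-image coordinates; the feature type, matcher, and thresholds are illustrative assumptions rather than the claimed method.

```python
import cv2
import numpy as np

def find_matched_area(first_img, second_img):
    """Return the corners (in first-image coordinates) of the area matching the second image."""
    g1 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    # Brute-force Hamming matching, keep the best matches only.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the second image's corners into the first image.
    h, w = g1.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, homography).reshape(4, 2)
```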
The area determination unit is configured to determine a main display image area and an auxiliary display image area in the first image according to the matched image area.
In one embodiment, the area determination unit is further configured to search for an inscribed quadrilateral of the matched image area, take the vertices of the inscribed quadrilateral as the vertices of the main display image area to obtain the main display image area, divide the image areas of the first image other than the main display image area into a plurality of sub-areas, and determine the auxiliary display image area corresponding to each sub-area according to the plurality of sub-areas.
In one embodiment, the area determination unit is further configured to expand each of the plurality of sub-areas according to an expansion ratio to obtain the auxiliary display image area corresponding to each sub-area.
The segmentation unit is configured to segment the first image according to the main display image area and the auxiliary display image area to obtain the main display image and the auxiliary display image.
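A minimal sketch of the area determination and segmentation steps is given below; it assumes the matched area has already been reduced to an axis-aligned inscribed rectangle and splits the remainder of the first image into four border strips that are expanded by a ratio, which is only one simple realization of the sub-areas described above.

```python
def segment_first_image(first_img, main_rect, expand_ratio=0.1):
    """Split the first image into a main display image and auxiliary display images.

    main_rect    -- (x0, y0, x1, y1), inscribed rectangle of the matched area (assumed given)
    expand_ratio -- illustrative expansion ratio applied to each border strip
    Returns the main display image and four expanded strips (top, bottom, left, right).
    """
    h, w = first_img.shape[:2]
    x0, y0, x1, y1 = main_rect
    main = first_img[y0:y1, x0:x1]

    # Expand each strip so that neighbouring images overlap slightly; the overlap
    # is later fused when the transformed images are spliced into the target image.
    ex = int((x1 - x0) * expand_ratio)
    ey = int((y1 - y0) * expand_ratio)

    top = first_img[0:min(y0 + ey, h), :]
    bottom = first_img[max(y1 - ey, 0):h, :]
    left = first_img[:, 0:min(x0 + ex, w)]
    right = first_img[:, max(x1 - ex, 0):w]
    return main, [top, bottom, left, right]
```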
In one embodiment, the transformation module 930 includes a first transformation unit and a second transformation unit.
The first transformation unit is configured to transform the main display image according to the pose information.
The second transformation unit is configured to determine a transformation matrix corresponding to the auxiliary display image based on the transformed main display image and to transform the auxiliary display image according to the transformation matrix.
In one embodiment, the second transformation unit is further configured to determine the intersecting pixel points shared by the auxiliary display image and the main display image and their corresponding transformed pixel points in the transformed main display image, and to calculate the transformation matrix corresponding to the auxiliary display image according to the corner pixel points of the auxiliary display image, the intersecting pixel points, and the corresponding transformed pixel points.
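As an illustration of how such a transformation matrix could be computed, the sketch below pairs the intersecting pixel points with their positions in the transformed main display image, keeps the outer corner points fixed (one possible choice, not prescribed by the embodiment), and solves a perspective transform from the combined correspondences.

```python
import cv2
import numpy as np

def aux_transform_matrix(intersecting_pts, transformed_pts, corner_pts):
    """Estimate the 3x3 transformation matrix of an auxiliary display image.

    intersecting_pts -- pixel points shared with the main display image, shape (N, 2)
    transformed_pts  -- the same points in the transformed main display image, shape (N, 2)
    corner_pts       -- outer corner pixel points of the auxiliary display image, shape (M, 2)
    """
    src = np.float32(np.vstack([intersecting_pts, corner_pts]))
    dst = np.float32(np.vstack([transformed_pts, corner_pts]))  # corners assumed fixed
    # With exactly four pairs cv2.getPerspectiveTransform would suffice; findHomography
    # with method=0 solves the over-determined case in a least-squares sense.
    matrix, _ = cv2.findHomography(src, dst, 0)
    return matrix

# The auxiliary display image could then be warped with, for example:
# warped = cv2.warpPerspective(aux_img, matrix, (out_w, out_h))
```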
In one embodiment, the splicing module 940 is further configured to align the transformed main display image with the transformed auxiliary display image, calculate the overlapping area between them, and fuse the overlapping area to obtain the target image by splicing.
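The fusion of the overlapping area can be pictured with simple alpha blending, as in the sketch below; the embodiment does not limit the fusion method, so this is only one straightforward choice, and the mask and weight parameters are assumptions.

```python
import numpy as np

def fuse_overlap(canvas, warped, mask_canvas, mask_warped, alpha=0.5):
    """Blend a warped image onto the canvas; pixels in the overlapping area are averaged.

    canvas, warped           -- images already placed in the same target coordinate frame
    mask_canvas, mask_warped -- boolean masks (H x W) marking the valid pixels of each image
    alpha                    -- blending weight given to the canvas inside the overlap
    """
    overlap = mask_canvas & mask_warped
    only_new = mask_warped & ~mask_canvas

    out = canvas.astype(np.float32)
    out[only_new] = warped[only_new]
    out[overlap] = alpha * canvas[overlap] + (1.0 - alpha) * warped[overlap]
    return out.astype(np.uint8)
```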
In the embodiment of the application, the main display image is transformed using the pose information of the terminal device, so that the main display image changes in step with the pose information and the interaction effect is improved. The auxiliary display image is then transformed based on the transformed main display image, which ensures the image effect of the transformed auxiliary display image and improves the accuracy of splicing. In this way, the image processing modes are enriched, the interactivity between the user and the image is improved, and the visual effect of the target image is enhanced.
In one embodiment, the image processing apparatus 900 further includes a display module in addition to the image segmentation module 910, the pose acquisition module 920, the transformation module 930, and the splicing module 940.
The display module is configured to display the target image.
The pose acquisition module 920 is further configured to acquire pose change information of the terminal device when a change in the pose information of the terminal device is detected.
The transformation module 930 is further configured to transform the main display image and the auxiliary display image again according to the pose change information.
The splicing module 940 is further configured to splice the re-transformed main display image and the re-transformed auxiliary display image to obtain an updated target image.
The display module is further configured to display the updated target image.
In the embodiment of the application, the terminal device can display the target image through the display interface and update the displayed target image in real time according to its pose. By changing the pose of the terminal device, the user can see different target images on the display interface, which improves the sense of interaction between the user and the image.
FIG. 10 is a block diagram of a terminal device in one embodiment. As shown in FIG. 10, the terminal device 1000 may include one or more of the following: a processor 1010 and a memory 1020 coupled to the processor 1010, where the memory 1020 may store one or more computer programs that can be configured to be executed by the one or more processors 1010 to implement the methods described in the embodiments above.
The processor 1010 may include one or more processing cores. The processor 1010 connects various parts of the terminal device 1000 using various interfaces and lines, and performs the various functions of the terminal device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1020 and by calling the data stored in the memory 1020. Optionally, the processor 1010 may be implemented in hardware in the form of at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1010 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 1010 and may instead be implemented by a separate communication chip.
The memory 1020 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 1020 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1020 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the terminal device 1000 during use, and the like.
It is understood that the terminal device 1000 may include more or fewer structural elements than those shown in the block diagram above, for example, a power module, physical buttons, a WiFi (Wireless Fidelity) module, a speaker, a Bluetooth module, sensors, and the like, which is not limited herein.
An embodiment of the present application discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the embodiments above.
An embodiment of the present application discloses a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the embodiments above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a ROM, etc.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In the various embodiments of the present application, it should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods described in the embodiments of the present application.
The image processing method, the image processing apparatus, the terminal device, and the computer-readable storage medium disclosed in the embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. An image processing method, applied to a terminal device, the method comprising the following steps:
segmenting the first image to obtain a main display image and an auxiliary display image;
acquiring pose information of the terminal device;
respectively transforming the main display image and the auxiliary display image according to the pose information;
and splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
2. The method of claim 1, wherein the first image comprises at least one of a wide-angle image captured by a wide-angle camera and an ultra-wide-angle image captured by an ultra-wide-angle camera.
3. The method of claim 1, wherein the segmenting the first image to obtain the main display image and the auxiliary display image comprises:
aligning a first image and a second image, and determining an image area in the first image that matches the second image, wherein the first image is an image acquired by the terminal device through a first camera at a first moment, the second image is an image acquired by the terminal device through a second camera at the first moment, and the field of view of the first camera is larger than that of the second camera;
determining a main display image area and an auxiliary display image area in the first image according to the matched image area;
and segmenting the first image according to the main display image area and the auxiliary display image area to obtain a main display image and an auxiliary display image.
4. The method of claim 3, wherein the determining a main display image area and an auxiliary display image area in the first image according to the matched image area comprises:
searching for an inscribed quadrilateral of the matched image area, and taking the vertices of the inscribed quadrilateral as the vertices of a main display image area to obtain the main display image area;
and dividing the image areas of the first image other than the main display image area into a plurality of sub-areas, and determining an auxiliary display image area corresponding to each sub-area according to the plurality of sub-areas.
5. The method of claim 4, wherein the determining an auxiliary display image area corresponding to each sub-area according to the plurality of sub-areas comprises:
expanding each of the plurality of sub-areas according to an expansion ratio to obtain the auxiliary display image area corresponding to each sub-area.
6. The method according to any one of claims 1 to 5, wherein the respectively transforming the main display image and the auxiliary display image according to the pose information comprises:
transforming the main display image according to the pose information;
and determining a transformation matrix corresponding to the auxiliary display image based on the transformed main display image, and transforming the auxiliary display image according to the transformation matrix.
7. The method of claim 6, wherein the determining a transformation matrix corresponding to the auxiliary display image based on the transformed main display image comprises:
determining intersecting pixel points shared by the auxiliary display image and the main display image and corresponding transformed pixel points in the transformed main display image;
and calculating the transformation matrix corresponding to the auxiliary display image according to corner pixel points of the auxiliary display image, the intersecting pixel points, and the corresponding transformed pixel points.
8. The method according to any one of claims 1 to 5, wherein the splicing the transformed main display image and the transformed auxiliary display image to obtain a target image comprises:
aligning the transformed main display image with the transformed auxiliary display image, and calculating an overlapping area between the transformed main display image and the transformed auxiliary display image;
and fusing the overlapping area between the transformed main display image and the transformed auxiliary display image, and splicing to obtain the target image.
9. The method of any of claims 1 to 5, further comprising:
displaying the target image;
acquiring pose change information of the terminal device when it is detected that the pose information of the terminal device changes;
and transforming the main display image and the auxiliary display image again according to the pose change information, splicing the re-transformed main display image and the re-transformed auxiliary display image to obtain an updated target image, and displaying the updated target image.
10. An image processing apparatus, applied to a terminal device, the apparatus comprising:
the image segmentation module is used for segmenting the first image to obtain a main display image and an auxiliary display image;
the pose acquisition module is used for acquiring pose information of the terminal equipment;
the transformation module is used for respectively transforming the main display image and the auxiliary display image according to the pose information;
and the splicing module is used for splicing the transformed main display image and the transformed auxiliary display image to obtain a target image.
11. A terminal device comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN202110201308.2A 2021-02-23 2021-02-23 Image processing method, image processing device, terminal equipment and computer readable storage medium Pending CN113012160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110201308.2A CN113012160A (en) 2021-02-23 2021-02-23 Image processing method, image processing device, terminal equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113012160A true CN113012160A (en) 2021-06-22

Family

ID=76407512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110201308.2A Pending CN113012160A (en) 2021-02-23 2021-02-23 Image processing method, image processing device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113012160A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106162019A (en) * 2015-04-16 2016-11-23 上海机电工程研究所 Single immersion pseudo operation training visualization system and method for visualizing thereof
CN106855987A (en) * 2016-12-21 2017-06-16 北京存在主义科技有限公司 Sense of reality Fashion Show method and apparatus based on model prop
CN111402135A (en) * 2020-03-17 2020-07-10 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111814238A (en) * 2020-07-13 2020-10-23 郑州奥腾网络科技有限公司 BIM real-time imaging method for breeding house based on artificial intelligence and mixed cloud reasoning
CN112017137A (en) * 2020-08-19 2020-12-01 深圳市锐尔觅移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112116654A (en) * 2019-06-20 2020-12-22 杭州海康威视数字技术股份有限公司 Vehicle pose determining method and device and electronic equipment
CN112212873A (en) * 2019-07-09 2021-01-12 北京地平线机器人技术研发有限公司 High-precision map construction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination