CN108305218B - Panoramic image processing method, terminal and computer readable storage medium - Google Patents



Publication number
CN108305218B
CN108305218B (application CN201711490078.6A)
Authority
CN
China
Prior art keywords
image
pixel points
panoramic image
pixel
panoramic
Prior art date
Legal status
Active
Application number
CN201711490078.6A
Other languages
Chinese (zh)
Other versions
CN108305218A (en)
Inventor
柴启蕾
Current Assignee
Zhejiang Shuike Culture Group Co ltd
Original Assignee
Zhejiang Shuike Culture Group Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shuike Culture Group Co ltd filed Critical Zhejiang Shuike Culture Group Co ltd
Priority to CN201711490078.6A priority Critical patent/CN108305218B/en
Publication of CN108305218A publication Critical patent/CN108305218A/en
Application granted granted Critical
Publication of CN108305218B publication Critical patent/CN108305218B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a panoramic image processing method, a terminal, and a computer-readable storage medium. The panoramic image processing method first determines an image deformation area in a panoramic image, in which all pixel points on the image boundary in the height direction are distributed in a concave arc shape; that is, the image that has undergone concave arc deformation is screened out first. Some pixel points are then deleted in the height direction of the image, and all pixel points above each deleted pixel point are moved down in turn, so that the pixel points of the concave arc portion move downwards and the image is compressed in the height direction. By implementing this scheme, the visible concave arc deformation of the image content in the panoramic image is reduced, so that the processed panoramic image tends to be consistent with the original scene, a what-you-shoot-is-what-you-see effect is approached as far as possible, and the user experience is improved.

Description

Panoramic image processing method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a panoramic image processing method, a terminal, and a computer-readable storage medium.
Background
With the development of intelligent terminal technology, the multimedia functions of terminals are becoming increasingly rich to meet user requirements. For example, more and more terminals support panoramic image shooting; most existing mobile phones can shoot panoramic images, allowing users to capture more objects in a single image and bringing a brand-new shooting experience.
To acquire a panoramic picture, a camera captures a plurality of images during the panoramic shooting process, and the captured images are then synthesized into the panoramic picture. In current panoramic shooting, the user typically holds the terminal and rotates it following the moving-direction prompts on the shooting interface so as to capture the desired scene. During shooting, the photographed objects do not move while the lens rotates with the user's body, so the varying distance between the lens and each object makes nearer objects appear larger and farther objects appear smaller; in particular, the object directly facing the lens mid-rotation appears largest. Consequently, when the captured images are merged, the images shot in the middle area are stretched vertically and compressed horizontally in order to keep all the photographed objects coordinated and unified across the images. As a result, the middle area of the resulting panoramic image exhibits a visible concave arc deformation; for square objects such as buildings, the arc deformation is especially obvious, and user satisfaction is poor.
Disclosure of Invention
The technical problem to be solved by the invention is that existing panoramic pictures exhibit a visible concave arc deformation, leading to poor user satisfaction. In view of this technical problem, a panoramic image processing method, a terminal, and a computer-readable storage medium are provided.
In order to solve the above technical problem, the present invention provides a panoramic image processing method, including the steps of:
determining an image deformation area in the panoramic image, wherein all pixel points on the image boundary in the height direction within the image deformation area are, as a whole, distributed in a concave arc shape;
and deleting some of the pixel points in the height direction of the image, and moving the pixel points above each deleted pixel point down in turn, to obtain the processed panoramic image.
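The delete-and-shift operation above can be sketched in a few lines. This is a minimal illustrative model, not the patented implementation: the image is assumed to be a plain list of rows (row 0 at the top), and the blank pixel left behind is represented by None so a later step can fill it.

```python
def compress_column(image, col, delete_row):
    """Delete the pixel at (delete_row, col) and shift every pixel above
    it down by one row. `image` is a list of rows with row 0 at the top;
    the vacated position at the top of the column is marked None, i.e. a
    blank pixel to be filled in a later step."""
    for r in range(delete_row, 0, -1):
        image[r][col] = image[r - 1][col]  # pixel above moves down
    image[0][col] = None                   # blank pixel left behind
    return image
```

For a column reading [1, 2, 3] from top to bottom, deleting the middle pixel yields [None, 1, 3]: pixel 2 is gone and pixel 1 has moved down, which is exactly the height-direction compression the claim describes.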
Optionally, the determining an image deformation region in the panoramic image includes:
acquiring preset area configuration information, wherein the area configuration information comprises position information of a middle area of a corresponding panoramic image;
and determining the middle area of the panoramic image as an image deformation area according to the area configuration information.
Optionally, the determining an image deformation region in the panoramic image includes:
comparing the image in each area of the panoramic image with the corresponding image among the plurality of source images captured when the panoramic image was synthesized;
and determining an image deformation area in the panoramic image according to the comparison result.
Optionally, deleting some of the pixel points in the height direction of the image includes:
determining a target pixel point at the arc top among the pixel points that are, as a whole, distributed in a concave arc shape;
and in the height direction of the image, deleting pixel points column by column, centered on the column containing the target pixel point, according to a rule that the number of deleted pixel points per column increases from the two width-direction edges of the image toward the center.
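One way to realize the edge-to-center rule is a ramp of per-column deletion counts. The linear shape of the ramp is an assumption made for illustration; the claim only requires the count to increase from the edges toward the column of the arc-top target pixel:

```python
def deletions_per_column(width, center_col, max_delete):
    """Per-column deletion counts: zero at both width-direction edges,
    rising to max_delete at the column containing the arc-top target
    pixel. The linear ramp is an illustrative assumption; any count that
    increases monotonically toward the center satisfies the stated rule."""
    span = max(center_col, width - 1 - center_col) or 1
    counts = []
    for c in range(width):
        d = abs(c - center_col) / span  # 1.0 at the far edge, 0.0 at the center
        counts.append(round(max_delete * (1 - d)))
    return counts
```

With a width of 5 columns, the center at column 2, and at most 4 deletions, the counts come out as [0, 2, 4, 2, 0]: nothing is removed at the edges and the most is removed at the arc top.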
Optionally, deleting pixel points according to the rule that the number of deleted pixel points per column increases from the two width-direction edges of the image toward the center includes:
when the number m of pixel points to be deleted in a given column is determined to be greater than or equal to 2, deleting m consecutive pixel points in that column, or deleting m pixel points in that column separated by a preset number of pixel points.
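The two deletion strategies named here, consecutive versus spaced, can be sketched as a row-index selector. Starting the deletions from row 0 of the column is an illustrative assumption; the text does not fix where in the column they begin:

```python
def rows_to_delete(m, height, spacing=None):
    """Choose which rows of one column to delete when m >= 2.
    spacing=None deletes m consecutive rows; an integer spacing spreads
    the deletions out, skipping `spacing` rows between deletions. Both
    strategies are named in the text; starting from row 0 is an
    illustrative assumption, not part of the claim."""
    if m < 2:
        raise ValueError("this rule applies when m >= 2")
    if spacing is None:
        return list(range(m))  # m consecutive rows
    return [i * (spacing + 1) for i in range(m) if i * (spacing + 1) < height]
```

Spreading the deletions out (the spaced variant) distributes the compression over the whole column instead of removing one contiguous band, which plausibly produces a less abrupt result.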
Optionally, before deleting some of the pixel points in the height direction of the image, the method further includes:
extracting the boundary pixel points of the image in the image deformation area;
and during the deletion, before deleting a determined pixel point, judging whether it is a boundary pixel point and, if so, retaining it.
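The boundary-preservation check can be sketched as a guard on the deletion step. Representing the column as a list of values and the extracted boundary as a set of row indices is an illustrative choice:

```python
def delete_with_boundary_guard(column, rows_to_drop, boundary_rows):
    """Delete the pixels of one column at the rows in `rows_to_drop`,
    but keep any row previously extracted as a boundary pixel point.
    The list/set representation is an illustrative assumption."""
    return [value for row, value in enumerate(column)
            if row not in rows_to_drop or row in boundary_rows]
```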
Optionally, after moving the pixel points above the deleted pixel points down in turn, the method further includes:
filling the blank pixel points left after the pixel points are moved down.
Optionally, filling the blank pixel points left after the pixel points are moved down includes:
acquiring pixel values of at least two pixel points adjacent to the blank pixel point;
estimating a target pixel value of the blank pixel point according to the pixel values of the at least two pixel points;
and setting the pixel value of the blank pixel point as the target pixel value.
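A blank-pixel fill along these lines can be sketched as follows. Simple averaging is one plausible estimator; the claim only requires the target value to be derived from the pixel values of at least two adjacent pixel points:

```python
def fill_blank(neighbor_values):
    """Estimate the value of a blank pixel from the values of at least
    two adjacent pixels. Integer averaging is an illustrative choice of
    estimator, not the only one the text admits."""
    if len(neighbor_values) < 2:
        raise ValueError("need at least two adjacent pixel values")
    return sum(neighbor_values) // len(neighbor_values)
```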
Further, the invention also provides a terminal, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the panoramic image processing method as described above.
Further, the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the panoramic image processing method as described above.
Advantageous effects
The panoramic image processing method first determines an image deformation area in a panoramic image, in which all pixel points on the boundary in the height direction are distributed in a concave arc shape; that is, the image that has undergone concave arc deformation is screened out first. Some pixel points are then deleted in the height direction of the image, and the pixel points above each deleted pixel point are moved down in turn, so that the pixel points of the concave arc portion move downwards and the image is compressed in the height direction. This reduces the visible concave arc deformation, makes the processed panoramic image consistent with the original scene, approaches a what-you-shoot-is-what-you-see effect as far as possible, and improves the user experience.
Drawings
The invention will be further described with reference to the following drawings and examples, in which:
fig. 1 is a schematic diagram of a hardware structure of an alternative mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
fig. 3 is a schematic flowchart of a panoramic image processing method according to a first embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a process of determining an image deformation area according to a first embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating another process of determining an image deformation area according to the first embodiment of the present invention;
FIG. 6 is a schematic diagram of a plurality of consecutive images according to a first embodiment of the present invention;
FIG. 7 is a schematic view of a panoramic image provided by the first embodiment of the present invention;
fig. 8-1 is a schematic diagram of pixel points distributed in a concave arc shape according to the first embodiment of the present invention;
fig. 8-2 is another schematic diagram of pixel points distributed in a concave arc shape according to the first embodiment of the present invention;
fig. 9 is a schematic diagram of the height direction and the width direction of an image in a panoramic image according to the first embodiment of the present invention;
fig. 10 is a schematic distribution diagram of a pixel point after being shifted down according to a first embodiment of the present invention;
fig. 11 is a schematic diagram of pixel point distribution according to a second embodiment of the present invention;
fig. 12 is a schematic diagram illustrating a pixel confirmation process according to a second embodiment of the present invention;
fig. 13 is a schematic diagram illustrating a pixel filling process according to a second embodiment of the present invention;
fig. 14 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are used only to facilitate the description of the present invention and have no particular meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, and a pedometer, as well as fixed terminals such as a digital TV and a desktop computer; the terminal in the present invention may also be any of various bendable flexible-screen terminals made using a flexible screen.
While a bendable mobile terminal is described as an example in the following description, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed bendable terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an a/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal structure shown in fig. 1 does not constitute a limitation of the mobile terminal, that the mobile terminal may include more or less components than those shown, or some components may be combined, or different component arrangements may be adopted, and that the mobile terminal may be adapted to change the positions, materials and functions of the components according to its own bending performance.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), and TDD-LTE (Time Division Duplex Long Term Evolution).
WiFi is a short-distance wireless transmission technology; through the WiFi module 102, the mobile terminal can help the user receive and send e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is for receiving an audio or video signal. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, an image capture sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of a mobile phone (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). Other sensors that may be configured on the mobile phone, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here. The image capture sensor may be implemented as any of various cameras capable of image shooting and video recording.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured by using a common screen, or various flexible screens, such as an Organic Light-Emitting Diode (OLED), an Active-matrix Organic Light-Emitting Diode (AMOLED), and the like. In this embodiment, one terminal may have one display panel, or a plurality of display panels may be provided, for example, a flexible screen having a front surface and a back surface, respectively.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a bendable touch panel 1071 and other input devices 1072. The flexible touch panel 1071, also called a flexible touch screen, can collect touch operations of a user (for example, operations of the user on or near the touch panel 1071 using any suitable object or accessory such as a finger, a stylus pen, etc.) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node for processing signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address allocation and other functions for UE201, PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for policy and charging enforcement function (not shown).
IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
First embodiment
The panoramic image processing method provided by this embodiment is suitable for various intelligent terminals, particularly mobile intelligent terminals, and is also applicable to non-mobile intelligent terminals. It should be understood that the method applies both to shooting panoramic pictures and to processing captured video frames during panoramic video shooting. The panoramic image processing method provided by this embodiment is shown in fig. 3 and includes:
S301: Determine an image deformation area in the panoramic image, in which all pixel points on the image boundary in the height direction are, as a whole, distributed in a concave arc shape.
It should be understood that any panoramic image acquisition manner may be adopted in this embodiment; for example, but not limited to, after it is detected that the user has triggered the panoramic shooting function, a plurality of images are collected by the camera during panoramic shooting and synthesized into a panoramic image. The synthesis may be performed after all images are acquired, i.e., a plurality of images are captured first and then synthesized in sequence; or the images may be combined during shooting, i.e., each time an image is captured, it is determined whether a previously captured image exists, and if so the new image is merged with it, until shooting finishes.
In addition, it should be understood that there is no particular requirement and limitation on the content in the panoramic image in the present embodiment, and the image content thereof may be any content that the user wants to photograph.
In this embodiment, the manner of determining the image deformation region in the panoramic image can be flexibly set. To facilitate understanding, the present embodiments are schematically illustrated by way of several examples.
The first method is as follows: confirmation based on preset configuration information
This mode makes full use of the characteristics of the panoramic shooting process and of the distance relationship between the lens and the photographed objects, so the area that produces the concave arc deformation can be determined and set in advance from experience. For example, when a user holds the shooting terminal and rotates it following the moving-direction prompts of the shooting interface, the physical distances between the photographed objects in different directions and the lens differ during shooting; the physical distance between the object in the middle and the lens is generally the smallest, so the image shot in the middle appears visually largest while the images shot in other directions appear relatively smaller. Therefore, when the captured images are merged, the images shot in the middle are stretched vertically and compressed horizontally in order to keep all photographed objects coordinated and unified, so that the middle area of the resulting panoramic image exhibits a visible concave arc deformation. In this mode, position information corresponding to the middle area of the panoramic image may therefore be set in advance as the preset configuration information. When the panoramic image is processed, the process of determining the image deformation area, shown in fig. 4, includes:
S401: acquire the preset configuration information.
The configuration information can be stored locally in the terminal, or stored in the cloud and retrieved from the cloud when the terminal needs it.
S402: extract the preconfigured position information from the acquired configuration information.
In addition, it should be understood that the position information in this embodiment is not limited to position information corresponding to a single area; depending on the specific application scenario, it may correspond to multiple areas.
S403: determine the image deformation area of the panoramic image according to the position information; the area determined in this method is the middle area.
Method two: the process of determining the image deformation region in the panoramic image, shown in fig. 5, includes:
S501: compare the image in each area of the panoramic image with the corresponding image among the several source images captured when the panoramic image was generated by merging.
In this embodiment, when extracting the image of each region in the panoramic image, the extraction may be performed according to the correspondence between each region of the panoramic image and its source image. For example, referring to fig. 6, assume the panoramic image is synthesized in order from a source image 61, a source image 62, a source image 63, a source image 64, and a source image 65 obtained by continuous shooting; the synthesized panoramic image is shown in fig. 7, where a region 71, a region 72, a region 73, a region 74, and a region 75, from left to right, correspond to the source images 61-65 respectively. In this example, the image content in region 71 is compared with that of source image 61, region 72 with source image 62, region 73 with source image 63, region 74 with source image 64, and region 75 with source image 65, so as to identify whether the image content in the panoramic image matches that of the corresponding source image.
For example, the distribution of the pixel points on the boundary of the image content in the panoramic image and the distribution of the corresponding boundary pixel points in the source image can each be identified, and the two distributions compared. If they are consistent, that part of the image content is not deformed: even if the boundary pixel points in the panoramic image are distributed in a concave arc, the content is not deformed but simply contains an arc shape. If they are inconsistent, with the boundary pixel points in the source image not distributed in a concave arc while the corresponding boundary pixel points in the panoramic image are, it can be concluded that this part of the panoramic image has undergone concave-arc deformation.
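The boundary comparison just described can be sketched as follows, assuming the boundary of the image content has already been extracted as one upper-boundary y-coordinate per column; the function names and the tolerance are illustrative:

```python
# Sketch (assumed, not from the patent text) of the S501 comparison: a region
# counts as deformed when the panorama's boundary sags into a concave arc
# while the matching source boundary does not.

def is_concave_arc(boundary_ys, tolerance=1):
    """True if the boundary dips in the middle relative to its ends
    (image y-coordinates grow downward)."""
    mid = len(boundary_ys) // 2
    edge_level = (boundary_ys[0] + boundary_ys[-1]) / 2
    return boundary_ys[mid] - edge_level > tolerance

def region_is_deformed(source_boundary, pano_boundary):
    # An arc already present in the scene (concave in the source too)
    # is not treated as deformation.
    return (not is_concave_arc(source_boundary)) and is_concave_arc(pano_boundary)

flat = [10] * 7                          # source: boundary on a horizontal line
sagging = [10, 12, 14, 15, 14, 12, 10]   # panorama: concave arc
print(region_is_deformed(flat, sagging))     # True
print(region_is_deformed(sagging, sagging))  # False
```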
Two examples of the concave-arc distribution in this embodiment are shown in fig. 8-1 (overall distribution) and fig. 8-2 (pixel point distribution); the black dots in figs. 8-1 and 8-2 are schematic pixel points. It should be understood, however, that the concave-arc distribution in this embodiment is not limited to these two examples.
S502: and determining an image deformation area in the panoramic image according to the comparison result.
Based on the comparison result, the images in the regions in the panoramic image are deformed, and then the image deformation region in the panoramic image is determined. And with the image deformation region determined in this way, the determined image deformation region may be a middle region of the panoramic image, such as shown at 73 in fig. 7, or may be another region of the panoramic image, such as shown at 72 or 74 in fig. 7. Determining an image deformation region based on this approach is more flexible than determining an image deformation region based on the foregoing approach. The specific selection mode can be flexibly set according to specific application scenes, and for example, a determination mode option can be provided for a user to select.
S302: and deleting partial pixel points in the determined height direction of the image in the image deformation area, and sequentially moving down the pixel points above the deleted pixel points to obtain the processed panoramic image.
For example, referring to fig. 9, assume the region indicated by 73 in the panoramic image is determined to be the image deformation region, where H indicates the height direction of the image in the region and W its width direction. In this step, part of the pixel points are deleted in the height direction and the pixel points above each deleted pixel point are moved down in turn, so that the boundary pixel points in the height direction change from an overall concave-arc distribution to a distribution approximately the same as that of the boundary pixel points on the corresponding source image. For example, assume the boundary pixel points of the source image lie on a horizontal line while the corresponding boundary pixel points in the panoramic image are distributed in a concave arc as shown in fig. 8-1; after the processing of fig. 3 in this embodiment, the boundary pixel points in the panoramic image again lie on a horizontal line, as shown in fig. 10, tending to match the upper boundary of the source image. The arc-deformed part of the panoramic image is thus restored to be consistent with the source image, achieving as far as possible the effect of seeing what was photographed, avoiding image distortion caused by deformation, and improving the satisfaction of the user experience.
In this embodiment, the selection principle for the pixel points to be deleted can be set flexibly, as long as, after the pixel points above each deleted pixel point are moved down in turn, the upper-boundary pixel points in the panoramic image that were distributed in a concave arc become distributed the same as, or similarly to, the upper-boundary pixel points in the corresponding source image.
In this embodiment, deleting a pixel point means clearing its pixel value; moving a pixel point down means using its pixel value as the value of the deleted pixel point below it.
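The delete-and-shift operation defined here can be sketched for a single column of the deformation region, represented as a top-to-bottom list of pixel values; `None` marks a cleared (blank) slot, and the names are illustrative:

```python
# Sketch of delete-and-shift for one column: deleting a pixel clears its
# value, and every pixel above each deleted position moves down one slot,
# so the blanks accumulate at the top of the column.

def delete_and_shift_down(column, rows_to_delete):
    """Remove the pixels at the given row indices and shift the rest down."""
    doomed = set(rows_to_delete)
    kept = [v for i, v in enumerate(column) if i not in doomed]
    blanks = [None] * (len(column) - len(kept))
    return blanks + kept  # blanks at the top after everything shifts down

col = [1, 2, 3, 4, 5, 6]                   # pixel values, top to bottom
print(delete_and_shift_down(col, {1, 3}))  # [None, None, 1, 3, 5, 6]
```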
In this embodiment, during panoramic shooting, after the several captured pictures are synthesized into a panoramic image, the panoramic image may be processed by the panoramic image processing method of this embodiment before being presented to the user. Alternatively, the panoramic image may first be presented to the user and then processed, for example according to a processing instruction issued after the user has viewed it. Either way, the pixels of the overall concave-arc portion are moved down, compressing the image in the height direction, reducing the visual concave-arc deformation, and making the processed panoramic image tend to be consistent with the original scene, achieving as far as possible the effect of seeing what was photographed and improving the satisfaction of the user experience.
Second embodiment
To facilitate understanding of the present invention, this embodiment further describes the invention, on the basis of the first embodiment, by taking a pixel-deletion rule as an example.
In this embodiment, part of the pixel points may be deleted in the height direction of the image in the determined image deformation region in the following example manner, so that the upper-boundary pixel points in the panoramic image distributed in a concave arc become the same as, or similar to, the upper-boundary pixel points in the corresponding source image. This process, shown in fig. 12, includes:
s1201: and determining target pixel points at the arc tops among all the pixel points of which the whole boundaries in the height direction of the image in the image deformation area are distributed in an inwards concave arc shape.
For example, as shown in fig. 11, it is assumed that the pixel points shown in 1101 are pixels whose boundaries in the height direction of the image in the image deformation region are distributed in an inward concave arc shape as a whole. It should be understood that, in this embodiment, there may be one or more determined target pixel points.
S1202: in the image height direction in the image deformation area, the pixel points are deleted by taking the row pixel points as a unit and taking the row where the target pixel point is located as a center according to a rule that the number of the pixel points is sequentially increased from two edges of the image width to the center.
As shown in fig. 11, the target pixel point shown in S1101 is at the arc top position, and therefore is stretched most, so that the number of pixel points to be deleted in the column where the target pixel point 1101 is located is the most, and the width directions from the column to both sides gradually decrease, so that the pixel points are deleted from the width direction shown in fig. 11 to the center direction of the column where the target pixel point is located according to the rule that the number of pixel points sequentially increases. For example, in an example, when deleting pixel points in units of columns from the left to the center position shown in fig. 11, the first column on the left may be selected to delete 0 pixels, the second column may delete 1 pixel, the third column may delete 2 pixels, and the nth column may delete n-1 pixels; similarly, when deleting pixel points in a unit of columns from the right to the center, the right first column can be selected to delete 0 pixels, the right second column can delete 1 pixel, the right third column can delete 2 pixels, and the right nth column can delete n-1 pixels. The specific number of increments in this embodiment may be set according to the resolution specification of the panoramic image, or may be set according to an empirical value, and may be dynamically adjusted according to the processing effect of the panoramic image.
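The counting rule of S1202, zero deletions at each width edge increasing by one per column toward the arc-top column, can be sketched as follows; the `step` parameter is an assumed generalization of the per-column increment mentioned above:

```python
# Sketch of the S1202 counting rule: the number of pixels to delete per
# column grows from 0 at each width edge toward the center column, where
# the stretch (and hence the deletion count) is largest.

def deletions_per_column(num_cols, step=1):
    """Number of pixels to delete in each column, edges -> center."""
    counts = []
    for c in range(num_cols):
        dist_from_edge = min(c, num_cols - 1 - c)  # distance to nearer edge
        counts.append(dist_from_edge * step)
    return counts

print(deletions_per_column(7))  # [0, 1, 2, 3, 2, 1, 0]
```

For an even number of columns the two middle columns share the maximum count; the increment `step` could be tuned to the panorama resolution, as the text suggests.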
In this embodiment, the step of deleting the pixel points according to the rule that their number increases in turn from the two width edges of the image toward the center further includes:
when the number m of pixel points to be deleted in a certain column is determined to be greater than or equal to 2, deleting m consecutive pixel points in that column, or deleting m pixel points in that column spaced by a preset number of pixel points.
For example, in one example, the preset spacing may be set to 1, 2, 3, or 4 pixel points. In another example, the spacing may also increase from the width edges of the image toward the center.
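The two layouts, m consecutive pixel points or m pixel points separated by a preset spacing, can be sketched as below; `start_row` and `gap` are illustrative parameters not fixed by the embodiment:

```python
# Sketch of the two deletion layouts for a column needing m >= 2 deletions:
# consecutive rows when gap == 0, or rows separated by `gap` kept pixels.

def rows_to_delete(m, start_row=0, gap=0):
    """Row indices to delete in one column."""
    return [start_row + i * (gap + 1) for i in range(m)]

print(rows_to_delete(3))                      # [0, 1, 2]   consecutive
print(rows_to_delete(3, start_row=4, gap=2))  # [4, 7, 10]  every 3rd row
```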
It should be understood that, beyond the example rule above, the number of pixel points to delete in each column may also be set flexibly according to the specific scenario. For example, user-defined settings may be supported, allowing the user to make local settings. For image content whose pixel points are inherently distributed in a concave arc, the user may set the number deleted for that part to 0 during processing; if the part is deformed by vertical stretching, the user can set the number of pixel points to delete in each column according to the visual effect and adjust it dynamically during processing.
In addition, in an example of this embodiment, to avoid local distortion of the processed image, the boundary pixel points of the image in the image deformation region may also be extracted and retained during processing. Therefore, before deleting part of the pixel points in the height direction of the image in the image deformation region, the method may further include the following steps:
extracting boundary pixel points of the image in the image deformation area;
while deleting part of the pixel points in the height direction of the image in the image deformation area, judging, before deleting each determined pixel point, whether it is a boundary pixel point, and if so, retaining it.
In this embodiment, to restore the processed image as far as possible, the pixel points may be filled. Specifically, the blank pixel points (i.e., pixel points whose values have been cleared) left after the pixel points above each deleted pixel point are moved down are filled. The filling method for the blank pixel points can also be set flexibly; for example, one filling method, shown in fig. 13, includes:
s1301: and acquiring pixel values of at least two pixel points adjacent to the blank pixel point.
S1302: and estimating the target pixel value of the blank pixel point according to the acquired pixel values of at least two pixel points.
S1303: and setting the pixel value of the blank pixel point as the target pixel value.
For example, the pixel values of at least two adjacent pixels of the blank pixel may be collected, the target pixel value of the blank pixel may be obtained by using a difference algorithm based on the pixel values of the pixels, and the pixel value of the blank pixel may be set as the target pixel value. Or the average value of the collected pixel values of a plurality of adjacent pixel points can be obtained, and the average value is set as the pixel value of the blank pixel point. In an example, the blank pixel left after the pixel is moved down can also be directly used as a white pixel.
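The averaging variant of S1301-S1303 can be sketched as below; an interpolation variant would have the same shape with a different estimator. The 4-neighbour choice and the white fallback for a blank pixel with no valid neighbours are assumptions, not specified by the embodiment:

```python
# Sketch of S1301-S1303 (averaging variant): fill one blank pixel with the
# mean of its available, non-blank 4-neighbours. Grid values are grayscale
# intensities; None marks a blank (cleared) pixel.

def fill_blank(grid, r, c):
    """Fill grid[r][c] in place and return the filled value."""
    neighbours = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
            if grid[rr][cc] is not None:
                neighbours.append(grid[rr][cc])
    # Fall back to white (255) if no valid neighbour exists -- assumption.
    grid[r][c] = sum(neighbours) // len(neighbours) if neighbours else 255
    return grid[r][c]

g = [[10, None, 30],
     [40,   50, 60]]
print(fill_blank(g, 0, 1))  # (10 + 30 + 50) // 3 = 30
```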
With the method of this embodiment, the part of the panoramic image that has undergone concave-arc deformation can be restored so that it stays as consistent as possible with the source image, achieving a WYSIWYG (what you see is what you get) effect. Meanwhile, during processing, the boundary pixel values of the image are retained to restore the image as far as possible, and the blank pixels left after the pixel points are moved down can be filled flexibly, further ensuring the processing effect and improving the satisfaction of the user experience.
Third embodiment
This embodiment provides a terminal, which may be a bendable flexible-screen terminal or an ordinary non-bendable terminal; the terminal is capable of shooting or recording video, or acquiring video from other terminals or networks, and may be, but is not limited to, the mobile terminal shown in fig. 1. Referring to fig. 14, the terminal in this embodiment includes a processor 1401, a memory 1402, and a communication bus 1403;
the communication bus 1403 is used for realizing communication connection between the processor 1401 and the memory 1402;
the processor 1401 is configured to execute one or more programs stored in the memory 1402 to implement the steps of the panoramic image processing method as exemplified in the embodiments above.
This embodiment also provides a computer-readable storage medium applicable to various terminals. The storage medium stores one or more programs, which are executable by one or more processors to implement the steps of the panoramic image processing method exemplified in the embodiments above.
To facilitate understanding of the present invention, this embodiment further illustrates the invention with a specific application scenario based on the terminal and computer storage medium described above.
In this example, after receiving a panoramic shooting instruction, the terminal shoots multiple continuous pictures according to the set panoramic shooting rule, for example rotating the mobile terminal to shoot from left to right or from right to left; the content shot may be anything, such as, but not limited to, landscapes, buildings, people, and animals. After the continuous pictures are obtained, they are synthesized in order into a panoramic image. Before the synthesized panoramic image is displayed to the user, the image deformation area in it is extracted; for example, assume the extracted deformation area is the middle area of the panoramic image and the deformed image there is a building. The boundary pixel points of the building graphic in the middle area are then obtained; the boundary can be identified with various image recognition techniques, which are not described further here. Then, taking the concave-arc boundary formed overall by the pixel points distributed in the height direction of the deformed building image, pixel points are deleted column by column, centered on the column containing the target pixel point, according to the rule that the number of deleted pixel points increases in turn from the two width edges of the image toward the center; the pixel points above each deleted pixel point are moved down in turn in the height direction of the building image, and the blank pixel points left after the move are filled with pixel values, thereby obtaining the processed panoramic image, which is then presented to the user.
At this point, the part of the displayed image that had concave-arc deformation has been processed and is basically consistent with the corresponding source image, so the panoramic image presented to the user provides a good visual experience and improves the satisfaction of the user's operation experience.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A panoramic image processing method, characterized by comprising the steps of:
determining an image deformation area in the panoramic image, wherein all pixel points on the boundary in the height direction of the image in the image deformation area are integrally distributed in an outward convex arc shape; the convex arc distribution is obtained by performing vertical stretching and horizontal compression on the image;
deleting partial pixel points in the height direction of the image, and sequentially moving down the pixel points above the deleted pixel points to obtain a processed panoramic image;
the deleting of the partial pixel points in the height direction of the image comprises:
determining a target pixel point at the arc top in all the pixel points which are integrally distributed in the outward convex arc shape;
and in the height direction of the image, deleting the pixel points by taking the column pixel points as units and taking the column where the target pixel point is positioned as a center according to a rule that the number of the pixel points is sequentially increased from two edges in the width direction of the image to the center.
2. The panoramic image processing method of claim 1, wherein the determining an image deformation region in the panoramic image comprises:
acquiring preset area configuration information, wherein the area configuration information comprises position information of a middle area of a corresponding panoramic image;
and determining the middle area of the panoramic image as an image deformation area according to the area configuration information.
3. The panoramic image processing method of claim 1, wherein the determining an image deformation region in the panoramic image comprises:
comparing the images in each area of the panoramic image with corresponding images in a plurality of source images shot when the panoramic image is generated in a combined mode;
and determining an image deformation area in the panoramic image according to the comparison result.
4. The panoramic image processing method according to claim 3, wherein the step of deleting the pixels according to a rule that the number of pixels increases in order from both edges of the image in the width direction toward the center includes:
when the number m of the pixel points needing to be deleted in a certain row is determined to be more than or equal to 2, deleting m continuous pixel points in the row, or deleting m pixel points in the row according to the interval number of preset pixel points.
5. The panoramic image processing method according to any one of claims 1 to 4, further comprising, before deleting partial pixel points in the height direction of the image:
extracting boundary pixel points of the image in the image deformation area;
in the process of deleting partial pixel points in the height direction of the image, before deleting the determined pixel points, judging whether the pixel points are boundary pixel points, if so, keeping the boundary pixel points.
6. The panoramic image processing method of any one of claims 1 to 4, wherein after sequentially moving down the pixels above the deleted pixel, the method further comprises:
and filling the blank pixel points left after the pixel points are moved downwards.
7. The panoramic image processing method of claim 6, wherein the filling the blank pixels left after the pixel is shifted down comprises:
acquiring pixel values of at least two pixel points adjacent to the blank pixel point;
estimating a target pixel value of the blank pixel point according to the pixel values of the at least two pixel points;
and setting the pixel value of the blank pixel point as the target pixel value.
8. A terminal, characterized in that the terminal comprises a processor, a memory and a communication bus;
the communication bus is used for realizing communication connection between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the panoramic image processing method according to any one of claims 1 to 7.
9. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the panoramic image processing method according to any one of claims 1 to 7.
CN201711490078.6A 2017-12-29 2017-12-29 Panoramic image processing method, terminal and computer readable storage medium Active CN108305218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711490078.6A CN108305218B (en) 2017-12-29 2017-12-29 Panoramic image processing method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711490078.6A CN108305218B (en) 2017-12-29 2017-12-29 Panoramic image processing method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108305218A CN108305218A (en) 2018-07-20
CN108305218B true CN108305218B (en) 2022-09-06

Family

ID=62867883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711490078.6A Active CN108305218B (en) 2017-12-29 2017-12-29 Panoramic image processing method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108305218B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581512A (en) * 2019-09-27 2021-03-30 鲁班嫡系机器人(深圳)有限公司 Image matching, 3D imaging and posture recognition method, device and system
CN112634124B (en) * 2020-12-10 2024-04-12 深兰工业智能创新研究院(宁波)有限公司 Image warping method, image warping device, electronic apparatus, and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101247513A (en) * 2007-12-25 2008-08-20 谢维信 Method for real-time generating 360 degree seamless full-view video image by single camera
CN103929565A (en) * 2013-01-10 2014-07-16 柯尼卡美能达株式会社 Image processing device and image processing method
CN105704398A (en) * 2016-03-11 2016-06-22 咸阳师范学院 Video processing method
CN105959546A (en) * 2016-05-25 2016-09-21 努比亚技术有限公司 Panorama shooting device and method
CN106373178A (en) * 2015-07-22 2017-02-01 阿迪达斯股份公司 Method and apparatus for generating an artificial picture
CN107105166A (en) * 2017-05-26 2017-08-29 努比亚技术有限公司 Image capturing method, terminal and computer-readable recording medium
CN107146195A (en) * 2016-07-27 2017-09-08 深圳市量子视觉科技有限公司 Sphere image split-joint method and device
CN107478235A (en) * 2017-08-18 2017-12-15 内蒙古财经大学 The dynamic map based on template obtains system under network environment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
EP1851948A1 (en) * 2005-02-09 2007-11-07 Previsite Method for bulk provision of interactive virtual visits pour multimedia broadcast, and system therefor
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
KR20150006191A (en) * 2013-07-08 2015-01-16 삼성전자주식회사 Method for operating panorama image and electronic device thereof
CN107240072B (en) * 2017-04-27 2020-06-05 南京秦淮紫云创益企业服务有限公司 Screen brightness adjusting method, terminal and computer readable storage medium
CN107094236A (en) * 2017-05-19 2017-08-25 努比亚技术有限公司 Panorama shooting method, mobile terminal and computer-readable recording medium

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN101247513A (en) * 2007-12-25 2008-08-20 谢维信 Method for real-time generating 360 degree seamless full-view video image by single camera
CN103929565A (en) * 2013-01-10 2014-07-16 柯尼卡美能达株式会社 Image processing device and image processing method
CN106373178A (en) * 2015-07-22 2017-02-01 阿迪达斯股份公司 Method and apparatus for generating an artificial picture
CN105704398A (en) * 2016-03-11 2016-06-22 咸阳师范学院 Video processing method
CN105959546A (en) * 2016-05-25 2016-09-21 努比亚技术有限公司 Panorama shooting device and method
CN107146195A (en) * 2016-07-27 2017-09-08 深圳市量子视觉科技有限公司 Sphere image split-joint method and device
CN107105166A (en) * 2017-05-26 2017-08-29 努比亚技术有限公司 Image capturing method, terminal and computer-readable recording medium
CN107478235A (en) * 2017-08-18 2017-12-15 内蒙古财经大学 The dynamic map based on template obtains system under network environment

Also Published As

Publication number Publication date
CN108305218A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN108259781B (en) Video synthesis method, terminal and computer-readable storage medium
CN107820014B (en) Shooting method, mobile terminal and computer storage medium
CN109068052B (en) Video shooting method, mobile terminal and computer readable storage medium
CN107948360B (en) Shooting method of flexible screen terminal, terminal and computer readable storage medium
CN107959795B (en) Information acquisition method, information acquisition equipment and computer readable storage medium
CN110072061B (en) Interactive shooting method, mobile terminal and storage medium
CN109391853B (en) Barrage display method and device, mobile terminal and readable storage medium
CN108280136B (en) Multimedia object preview method, equipment and computer readable storage medium
CN111935402B (en) Picture shooting method, terminal device and computer readable storage medium
CN108459799B (en) Picture processing method, mobile terminal and computer readable storage medium
CN108062162B (en) Flexible screen terminal, placement form control method thereof and computer-readable storage medium
CN109120858B (en) Image shooting method, device, equipment and storage medium
CN111866388B (en) Multiple exposure shooting method, equipment and computer readable storage medium
CN112511741A (en) Image processing method, mobile terminal and computer storage medium
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN107241504B (en) Image processing method, mobile terminal and computer readable storage medium
CN112188058A (en) Video shooting method, mobile terminal and computer storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108282608B (en) Multi-region focusing method, mobile terminal and computer readable storage medium
CN108305218B (en) Panoramic image processing method, terminal and computer readable storage medium
CN107395971B (en) Image acquisition method, image acquisition equipment and computer-readable storage medium
CN112965680A (en) Screen projection method, screen projection initiating device and storage medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN108600639B (en) Portrait image shooting method, terminal and computer readable storage medium
CN107205130B (en) Video recording method based on double cameras, terminal and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220816

Address after: Room 623, Building 1, No. 132, Shenjia Road, Dongxin Street, Xiacheng District, Hangzhou City, Zhejiang Province, 310006

Applicant after: Zhejiang Shuike Culture Group Co.,Ltd.

Address before: 10 / F, block a, Han's innovation building, 9018 Beihuan Avenue, gaoxinyuan, Nanshan District, Shenzhen, Guangdong Province

Applicant before: NUBIA TECHNOLOGY Co.,Ltd.

GR01 Patent grant