CN111031377B - Mobile terminal and video production method

Info

Publication number
CN111031377B
Authority
CN
China
Prior art keywords
face
picture
image data
original picture
texture
Prior art date
Legal status
Active
Application number
CN201911205831.1A
Other languages
Chinese (zh)
Other versions
CN111031377A
Inventor
孙喜洲
张昊
赵雨岩
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN201911205831.1A
Publication of CN111031377A
Application granted
Publication of CN111031377B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 - Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/47 - End-user applications
    • H04N21/485 - End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a mobile terminal and a video production method, and belongs to the technical field of the internet. It aims to solve the problem in the prior art that, when a face video is produced, the produced video is less enjoyable to watch because the positions of the faces differ from picture to picture. The mobile terminal comprises: a memory for storing original pictures and video files; and a processor for converting an original picture into a texture picture; determining the image data containing the face in the texture picture according to the region information of the face in the original picture; determining the position information that places the face at the center position of the display screen according to the size of the display screen of the mobile terminal; and drawing the position information and the image data containing the face into an encoder to generate a video file. According to the embodiment of the invention, the image data of the face region in the picture and the position information of the face at the center position of the display screen are drawn into the encoder, so that the pictures can be made into a video with the faces centered, which makes the video more enjoyable to watch.

Description

Mobile terminal and video production method
Technical Field
The invention relates to the technical field of internet, in particular to a mobile terminal and a video making method.
Background
With the development of intelligent devices, the face regions in the pictures stored in a mobile terminal can be made into a video for the user to watch. At present, the video production process obtains picture data from a storage area and then encodes the picture data with a video encoder to generate a video.
In general, when a video is created from pictures in a mobile terminal, the people in the pictures are what the viewer mainly watches. However, among the original pictures used for the video, there are pictures in which the people are noticeably off-center. For example, the person may be in the lower left corner of the picture, in the lower right corner, in the upper left corner, and so on. When such pictures are made into a video, the video is less enjoyable to watch.
Disclosure of Invention
The invention provides a mobile terminal and a video production method, which are used for solving the problem in the prior art that, when face videos are produced, the produced video is less enjoyable to watch because the positions of the faces differ from picture to picture.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including: a processor and a memory;
the memory is used for storing original pictures and video files;
the processor is used for converting the original picture into a texture picture;
determining image data containing the face in the texture picture according to the region information of the face in the original picture;
determining position information of a face at the center position of a display screen according to the size of the display screen of the mobile terminal;
and drawing the position information and the image data containing the human face into an encoder to generate a video file.
According to the mobile terminal, the image data containing the face in the texture picture is determined according to the region information of the face in the original picture, and this image data, together with the position information that places the face at the center position of the display screen of the mobile terminal, is directly made into the video.
In one possible implementation, the processor is specifically configured to:
adjusting image data containing human faces in the texture pictures according to a preset video animation effect;
and drawing the position information and the adjusted image data into an encoder to generate a video file.
According to the mobile terminal, an animation effect is added to the video: the image data containing the face in the texture picture is adjusted according to the preset video animation effect, and the adjusted image data and the position information are drawn into the encoder to generate the video file, which makes the video more enjoyable to watch and improves the user experience.
In one possible implementation, the processor is specifically configured to:
the image data includes: the size of the image;
determining an animation effect corresponding to the texture picture according to a preset video animation effect;
copying image data including human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data;
determining the size information of each picture required for forming the animation effect corresponding to the texture picture according to the animation effect corresponding to the texture picture;
and adjusting the image size in the plurality of image data according to the size information of each picture.
The mobile terminal can determine the animation effect of each texture picture according to the video animation effect, determine the number of pictures required by that animation effect, copy the image data of the face contained in the texture picture to obtain a plurality of image data, and adjust the plurality of image data according to the animation effect of the texture picture.
In one possible implementation, the processor is specifically configured to:
identifying the position of the face of an original picture, and obtaining a face frame according to the position of the face of the original picture;
if the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
When the mobile terminal acquires the region information of the face in the original picture, two cases are distinguished. If a picture contains one face, the face frame is used as the reference, and the region information of the face in the original picture is determined according to the shortest distance between an edge of the face frame and the corresponding edge of the original picture. If a picture contains a plurality of faces, the largest face is used as the reference, and the region information is determined according to the shortest distance between an edge of the largest face frame and the corresponding edge of the original picture. In this way, the displayed face region can be determined from the viewing perspective for the conditions of different original pictures.
In one possible implementation, the processor is specifically configured to:
determining image data of the face contained in the texture picture through an OpenGL three-dimensional image processing library according to the area information of the face in the original picture;
and drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file.
According to the mobile terminal, the virtual display screen in the encoder is used together with the OpenGL three-dimensional image processing library: the adjusted image data is drawn directly onto the virtual display screen in the encoder, i.e. onto the surface of the encoder, and the encoder then writes the image data on the surface into the video file. This avoids memory overflow caused by storing the adjusted image data and improves image processing performance.
In a second aspect, a video production method provided in an embodiment of the present invention is applied to a mobile terminal, and includes:
converting an original picture into a texture picture;
determining image data containing the face in the texture picture according to the region information of the face in the original picture;
determining position information of a face at the center position of a display screen according to the size of the display screen of the mobile terminal;
and drawing the position information and the image data containing the human face into an encoder to generate a video file.
In one possible implementation, before the rendering the position information and the image data containing the human face into an encoder to generate a video file, the method further includes:
adjusting image data containing human faces in the texture pictures according to a preset video animation effect;
the drawing the position information and the image data containing the human face into an encoder to generate a video file comprises:
and drawing the position information and the adjusted image data into an encoder to generate a video file.
In one possible implementation, the image data includes: the size of the image;
the adjusting the image data containing the human face in the texture picture according to the preset video animation effect comprises the following steps:
determining an animation effect corresponding to the texture picture according to a preset video animation effect;
copying image data including human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data;
determining the size information of each picture required for forming the animation effect corresponding to the texture picture according to the animation effect corresponding to the texture picture;
and adjusting the image size in the plurality of image data according to the size information of each picture.
In a possible implementation manner, the method for acquiring the region information of the face in the original picture includes:
identifying the position of the face of an original picture, and obtaining a face frame according to the position of the face of the original picture;
if the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
In a possible implementation manner, the determining, according to the region information of the face in the original picture, that the texture picture includes image data of the face includes:
determining image data of the face contained in the texture picture through an OpenGL three-dimensional image processing library according to the area information of the face in the original picture;
the drawing the position information and the image data containing the human face into an encoder to generate a video file comprises:
and drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file.
In a third aspect, the present application further provides a computer storage medium having a computer program stored thereon, which when executed by a processing unit, performs the steps of the video production method according to the second aspect.
In addition, for technical effects brought by any one implementation manner of the second aspect to the third aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention and are not to be construed as limiting the invention.
Fig. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a software architecture of a mobile terminal according to an embodiment of the present invention;
fig. 3 is a flowchart of a video production method according to an embodiment of the present invention;
fig. 4 is a schematic diagram showing that the position information of point O and point P of the region of the face in the original picture constitutes the region information according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a face centered on a display screen according to an embodiment of the present invention;
fig. 6 is an operation diagram illustrating a video production performed by a user on a picture of a mobile terminal according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a video made from original pictures of a man, a woman and a clown according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a region of a human face according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an original picture, a position of a face, and region information of the face in a rectangular coordinate system according to an embodiment of the present invention;
FIG. 10 is a flow chart illustrating an overall method of video production according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" in the text only describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Fig. 1 shows a schematic configuration of a mobile terminal 100.
The following describes an embodiment specifically by taking the mobile terminal 100 as an example. It should be understood that the mobile terminal 100 shown in fig. 1 is merely an example, and that the mobile terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of a mobile terminal 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the mobile terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the mobile terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the mobile terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the mobile terminal 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front of the mobile terminal 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 100. In particular, the display unit 130 may include a display screen 132 disposed on the front surface of the mobile terminal 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces described herein.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the mobile terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The mobile terminal 100 may further include at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The mobile terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between a user and the mobile terminal 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The mobile terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the mobile terminal 100 may help a user to receive and transmit e-mails, browse webpages, access streaming media, and the like through the Wi-Fi module 170, which provides a wireless broadband internet access for the user.
The processor 180 is a control center of the mobile terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the mobile terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, and a touch response, and the processing method described in the embodiments of the present application. Further, the processor 180 is coupled with the display unit 130.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the mobile terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The mobile terminal 100 also includes a power supply 190 (e.g., a battery) that powers the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The mobile terminal 100 may also be configured with power buttons for powering the terminal on and off, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the mobile terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the mobile terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the mobile terminal vibrates, an indicator light flashes, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the mobile terminal 100 software and hardware in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of an application framework layer, starts the camera application, further starts a camera drive by calling a kernel layer, and captures a still image or a video through the camera 140.
The mobile terminal 100 in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a television, and the like.
The mobile terminal provided by the embodiment of the invention adopts the OpenGL three-dimensional image processing library; that is, functions in the OpenGL three-dimensional image processing library are called so that the region containing the face in the original picture is drawn directly onto the encoder. After acquiring the image data of the region containing the face, the encoder directly encodes that image data to generate the video file. Because no I/O operation is spent on forming a picture and storing it in the memory during video production, the processing speed is improved. At the same time, because a picture formed from the region containing the face does not need to be stored, memory overflow caused by intermediate products of the video production process can be avoided.
Hereinafter, a video production method performed in the above-described mobile terminal will be described in detail with reference to the accompanying drawings.
As shown in fig. 3, the video production method includes:
s300: and converting the original picture into a texture picture.
The type of data that can be processed by the OpenGL three-dimensional image processing library is Texture picture (Texture). The method for converting the original picture into the texture picture includes various methods, and three methods are introduced as follows:
the first method is as follows: firstly, acquiring a handle of an original picture, wherein the handle is a special pointer, and when an application needs to refer to an object managed by a memory in a terminal, the application uses the handle for reference. And secondly, obtaining bitmap information according to the handle of the original picture, and finally establishing a texture through a gluBuild2DMipmaps function according to an RGB (RGB color mode) value in the bitmap information to obtain the texture picture. The gluBuild2DMipmaps function is a function used for creating a texture, and parameters required by the function are dimensions for loading the texture, such as a 2D plane graph, composition of color components, RGB values, size information of a bitmap, and the like.
The second method comprises the following steps: firstly, bitmap information is acquired by means of the OpenGL three-dimensional image processing library using the AUX_RGBImageRec structure and the auxDIBImageLoad function, and then the texture is created by using the gluBuild2DMipmaps function. AUX_RGBImageRec and auxDIBImageLoad both belong to the OpenGL three-dimensional image processing library.
The third method comprises the following steps: first, the file structure of the original picture, including the file header, the information header and the data, is read, and a Texture (Texture) is created through glTexImage2D according to the file structure of the read original picture.
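For illustration only, the conversion can be sketched on Android with the GLES20 and GLUtils classes, where GLUtils.texImage2D wraps the glTexImage2D call mentioned in the third manner; the helper class and its name below are assumptions of this sketch rather than part of the claimed method, and an EGL context is assumed to be current on the calling thread.

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import android.opengl.GLUtils;

    public final class TextureHelper {
        // Uploads a decoded original picture (Bitmap) as an OpenGL ES texture picture.
        public static int createTextureFromBitmap(Bitmap bitmap) {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);                 // allocate a texture name
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            // Copy the bitmap pixels into the bound texture object.
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
            return ids[0];                                   // handle of the texture picture
        }
    }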
S301: and determining the image data containing the face in the texture picture according to the region information of the face in the original picture.
The region information of the face in the original picture is represented as follows: the position information, within the picture, of the upper left corner and the lower right corner of the region of the face in the original picture. As shown in fig. 4, C1 represents the region of the face in the original picture, point O is the upper left point on the boundary of the face region in the picture, and point P is the lower right point on the boundary of the face region in the picture. The position information of point O and point P is the region information of the region C1 of the face in the original picture.
After the position information, within the picture, of the upper left corner and the lower right corner of the region of the face in the original picture is obtained, the corresponding region in the texture picture is located, and the image data obtained from that region is the image data containing the face in the texture picture.
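As a minimal sketch (the picture size and the coordinates of points O and P are assumed values), mapping the region information to the normalized texture coordinates used when sampling the texture picture could look as follows:

    // Sketch: map the face region given by O (upper left) and P (lower right),
    // in pixel coordinates of the original picture, to normalized [0, 1]
    // texture coordinates for sampling the texture picture with OpenGL ES.
    float picWidth = 1920f, picHeight = 1080f;       // assumed original picture size
    float ox = 200f, oy = 150f;                      // point O of region C1 (assumed)
    float px = 900f, py = 850f;                      // point P of region C1 (assumed)
    float u0 = ox / picWidth, v0 = oy / picHeight;   // upper-left texture coordinate
    float u1 = px / picWidth, v1 = py / picHeight;   // lower-right texture coordinate
    // Vertex order: upper-left, lower-left, upper-right, lower-right (triangle strip).
    float[] faceTexCoords = { u0, v0, u0, v1, u1, v0, u1, v1 };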
S302: and determining the position information of the face at the central position of the display screen according to the size of the display screen of the mobile terminal.
In detail, the size of the display screen of the mobile terminal may be the length of a diagonal of the display screen. The midpoint of the diagonal is taken as the center position of the display screen, and the face is then placed at that midpoint; the midpoint position is the position information of the center of the face on the display screen, and this position information may be used as the determined position information of the face at the center position of the display screen.
As shown in fig. 5, the midpoint of the diagonal of the display screen is Q1, and the position of midpoint Q1 is the position of the center Q2 of the face on the display screen. For example, if the picture corresponding to the face is a square, the center of the square is Q2; if the picture corresponding to the face is a rectangle, the intersection of the diagonals of the rectangle is the center Q2.
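A minimal arithmetic sketch of this step, with an assumed display size and an assumed drawing size of the face region:

    // Sketch: place the face region so that its center Q2 coincides with the
    // midpoint Q1 of the display screen's diagonal.
    int screenW = 1080, screenH = 2340;     // assumed display size in pixels
    int faceW = 600, faceH = 600;           // assumed size of the face region to draw
    int centerX = screenW / 2;              // Q1.x, midpoint of the diagonal
    int centerY = screenH / 2;              // Q1.y
    int drawLeft = centerX - faceW / 2;     // top-left drawing position that centers the face
    int drawTop  = centerY - faceH / 2;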
S303: and drawing the position information and the image data containing the human face into an encoder to generate a video file.
According to the scheme, the image data of the face part in the original picture and the position information of the face at the central position of the display screen of the mobile terminal are directly drawn into the encoder to be manufactured into the video, the pictures in the formed video are all the face part, and the pictures are all at the central position of the display screen, so that the ornamental value of the video is improved.
Suppose the mobile terminal makes a video from 3 pictures in the memory, containing a man, a woman and a clown, where the face of the man is in the upper left corner of its picture, the face of the woman is in the upper right corner of its picture, and the face of the clown is in the upper left corner of its picture. When the method is adopted, the face of the man, the face of the woman and the face of the clown are each displayed at the center position of the display screen in the produced video, and since the user always watches the center position when viewing the video, the video is more enjoyable.
The original picture provided by the embodiment of the invention can be a picture shot by a camera in the mobile terminal, can also be a picture downloaded from the internet or an application program, and can also be a picture intercepted in the process of screenshot.
In practical applications, the pictures obtained in the above ways are all stored in the memory. Therefore, when the method of the present invention is executed, each original picture stored in the memory may be obtained in turn, in chronological order, through the path under which it is stored. The following steps are then performed on each obtained original picture one by one: obtain the region information of the face in the current original picture, convert the current original picture into a Texture, determine the image data containing the face in the Texture according to the region information of the face in the current original picture, adjust the image data containing the face in the Texture according to the preset video animation effect, and draw the adjusted image data into the encoder. That is, the first image data is drawn into the encoder and encoded, then the second image data is drawn into the encoder and encoded, and so on, until all the original pictures have been processed and the video file is complete.
The operation of triggering the video production can be that a user clicks a designated button, and the mobile terminal starts to produce the video after detecting that the designated button is clicked.
The original pictures for making the video may be not only all the pictures in the above-mentioned mobile terminal, but also pictures selected by the user in the mobile terminal. In this regard, the present invention is not particularly limited.
For example, as shown in fig. 6, when the user clicks a designated button in the picture application, the mobile terminal makes the 3 pictures in the memory containing the man, the woman and the clown into an MP4 video file, using the face of the man, the face of the woman and the face of the clown as the display references. After the video file is generated, the user can click the video file to play it.
In order to improve the ornamental performance of the video, the embodiment of the invention further comprises:
and adjusting the image data containing the human face in the texture picture according to a preset video animation effect, and drawing the position information and the adjusted image data into an encoder to generate a video file.
The video animation effect refers to the animation effect adopted in the video manufactured by the original picture, so that the image data of the human face contained in the texture picture can be adjusted according to the animation effect of the whole video.
Wherein the image data includes an image size.
In practical applications, the image data containing the face extracted from the plurality of original pictures used for the video may differ in size: the extracted face image data is smaller when, for example, the picture contains several faces or is mainly a landscape, and larger when, for example, the picture is a close-up portrait. Meanwhile, the preset video animation effect uses one uniform size.
In this case, the specific implementation of the embodiment of the present invention is as follows: the size of the images forming the preset video animation effect is determined according to the preset video animation effect, and the image size of the image data containing the face in the current texture picture is then adjusted to this size. If the image size of the image data containing the face in the current texture picture is smaller, it is enlarged; if it is larger, it is reduced.
In practical applications, if the preset video animation effect is composed of a plurality of animation effects, each texture picture has its own animation effect, wherein the animation effects are, for example, gradually reducing or enlarging the pictures.
For such a situation, the specific implementation procedure of the embodiment of the present invention is as follows: determining an animation effect corresponding to the texture picture according to a preset video animation effect; copying image data containing human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data; and adjusting the plurality of image data according to the animation effect corresponding to the texture picture.
When the animation effect corresponding to the texture picture is to gradually enlarge or gradually reduce the picture, the mobile terminal provided by the invention can adjust the image size of the image data by adjusting the mode of the plurality of image data according to the animation effect corresponding to the texture picture.
Specifically, according to the animation effect corresponding to the texture picture, the size information of each picture required for forming the animation effect corresponding to the texture picture is determined;
and adjusting the image size in the plurality of image data according to the size information of each picture.
As shown in fig. 7, the (n+1)-th to (n+3)-th images in the video produced from the original pictures provided by the present invention all use the face of the same woman, and the animation effect corresponding to her face is "gradually enlarge the picture". The image data containing the face in the texture picture is copied according to the 3 pictures required to form "gradually enlarge the picture", giving 3 pieces of image data. The size information of the 3 images is then determined, and the sizes of the 3 pieces of image data are adjusted to the corresponding 3 sizes, so that the animation of "gradually enlarging the picture" is formed.
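As a hedged sketch of this adjustment (the number of copies and the zoom range are assumptions, not values fixed by the embodiment), the size information of the copies for a "gradually enlarge the picture" effect could be generated as follows:

    // Sketch: produce the size information of the copies used for a
    // "gradually enlarge the picture" animation effect.
    int faceW = 600, faceH = 600;               // assumed base size of the face image data
    int frames = 3;                             // number of pictures required by the effect
    float startScale = 1.0f, endScale = 1.3f;   // assumed zoom range
    float[][] sizes = new float[frames][2];
    for (int i = 0; i < frames; i++) {
        float t = (frames == 1) ? 0f : (float) i / (frames - 1);
        float s = startScale + t * (endScale - startScale);
        sizes[i][0] = faceW * s;                // adjusted image width of copy i
        sizes[i][1] = faceH * s;                // adjusted image height of copy i
    }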
In order to improve the appreciation of the user, the pictures in the video not only show the individual faces, but also include other areas except the faces. Meanwhile, in the original picture of the video, one face or a plurality of faces may be included, and only one face is used for showing one original picture when the video is made.
Based on the above, the method for acquiring the region information of the face in the original picture provided by the embodiment of the present invention first determines the position of the face, and then determines the region information of the face according to the position of the face. The position of the face differs from the region information of the face: the position of the face is the position of the face alone, while the region information of the face describes a region containing the face together with other areas besides the face. Referring to fig. 8, C2 is the region indicated by oblique lines at the face position of the original picture, and C3 is the area other than the face; C2 and C3 together are called the region of the face.
The method specifically comprises the following steps:
and identifying the position of the face of the original picture, and obtaining a face frame according to the position of the face of the original picture.
If the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
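For illustration, face frames can be obtained on Android with the platform android.media.FaceDetector API; deriving a square frame from the eye distance, as below, is an assumption of this sketch and is not the face positioning model described next.

    import android.graphics.Bitmap;
    import android.graphics.PointF;
    import android.graphics.RectF;
    import android.media.FaceDetector;

    public final class FaceFrameHelper {
        // Returns the largest face frame found in the picture, or null if none.
        // FaceDetector expects an RGB_565 bitmap whose width is even.
        public static RectF largestFaceFrame(Bitmap rgb565Bitmap) {
            int maxFaces = 10;
            FaceDetector detector = new FaceDetector(
                    rgb565Bitmap.getWidth(), rgb565Bitmap.getHeight(), maxFaces);
            FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
            int found = detector.findFaces(rgb565Bitmap, faces);
            RectF largest = null;
            for (int i = 0; i < found; i++) {
                PointF mid = new PointF();
                faces[i].getMidPoint(mid);
                float half = faces[i].eyesDistance();   // rough half-size of the face frame
                RectF box = new RectF(mid.x - half, mid.y - half, mid.x + half, mid.y + half);
                if (largest == null || box.width() > largest.width()) {
                    largest = box;                      // keep the largest face frame
                }
            }
            return largest;
        }
    }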
The area information of the face in the original picture can be acquired in the following way:
and inputting the original picture into the face positioning model to obtain the position of the face in the original picture.
The face localization model is trained by the following training method.
A training set is obtained, the training set comprising original pictures and the calibrated positions of the faces in them. When an original picture is input, the face positioning model locates the positions of facial features in the original picture, where the facial features are the eyes, the nose, the mouth, the beard and the like; from the positions of these features, the position of the face is obtained. The obtained face position and the calibrated face position are input into a loss function together, and the face positioning model is adjusted until the value obtained by inputting the two positions into the loss function is smaller than a certain value, i.e. the obtained face position is close to the calibrated face position. At that point the face positioning model is trained.
The method for determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture comprises the following steps:
the introduced method for recording the position of the face is that the coordinate information of the upper left corner and the lower right corner of the face is obtained, then the face frame is determined according to the coordinate information of the upper left corner and the lower right corner, the frame of the region of the face in the original picture is determined according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture, and the position information of the upper left vertex and the lower right vertex in the four vertexes of the frame is the region information.
Referring to fig. 9, in the rectangular coordinate system the four vertices of the original picture are the upper left point A(x1, y1), the lower left point C(x1, y2), the upper right point D(x2, y1) and the lower right point B(x2, y2). The position of the face is given by the upper left point a(x3, y3) and the lower right point b(x4, y4), so the four vertices of the face frame corresponding to the face position are the upper left point a(x3, y3), the lower left point c(x3, y4), the upper right point d(x4, y3) and the lower right point b(x4, y4).
First, the four distances between the edges of the face frame and the corresponding edges of the original picture are calculated: the first distance s1 = y1 - y3, the second distance s2 = x3 - x1, the third distance s3 = y4 - y2, and the fourth distance s4 = x2 - x4. If the first distance s1 is the shortest, each side of the face frame is moved outward by the first distance s1, forming a region of the face composed of the four vertices e, f, g, h, shown as the frame of dotted lines in fig. 8.
The abscissa of vertex e of the face region is x3 - s1, i.e. x3 - y1 + y3, and the ordinate of vertex e is y1.
The abscissa of vertex f of the face region is the same as the abscissa of vertex e, i.e. x3 - y1 + y3, and the ordinate of vertex f is y1 - 2*s1 - (y3 - y4), i.e. y1 - 2(y1 - y3) - (y3 - y4) = (y3 + y4) - y1.
The abscissa of vertex g of the face region is x4 + s1, i.e. x4 + y1 - y3, and the ordinate of vertex g is the same as the ordinate of vertex f, i.e. (y3 + y4) - y1.
The abscissa of vertex h of the face region is the same as the abscissa of vertex g, i.e. x4 + y1 - y3, and the ordinate of vertex h is the same as the ordinate of vertex e, i.e. y1.
The region information of the face in the original picture in fig. 9 therefore consists of the abscissa of vertex e (x3 - y1 + y3), the ordinate of vertex e (y1), the abscissa of vertex g (x4 + y1 - y3) and the ordinate of vertex g ((y3 + y4) - y1).
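The same construction can be written compactly in code. The sketch below is an illustration under stated assumptions: it uses image coordinates (y growing downward) rather than the mathematical coordinate system of fig. 9, and the class and method names are invented for this example.

    import android.graphics.RectF;

    public final class FaceRegionHelper {
        // Expands the face frame outward by the shortest distance between its
        // edges and the corresponding picture edges (image coordinates, y grows downward).
        public static RectF faceRegion(RectF faceFrame, int picWidth, int picHeight) {
            float dLeft   = faceFrame.left;               // to the left picture edge
            float dTop    = faceFrame.top;                // to the top picture edge
            float dRight  = picWidth  - faceFrame.right;  // to the right picture edge
            float dBottom = picHeight - faceFrame.bottom; // to the bottom picture edge
            float s = Math.min(Math.min(dLeft, dRight), Math.min(dTop, dBottom));
            RectF region = new RectF(faceFrame);
            region.inset(-s, -s);                         // move every side outward by s
            return region;                                // its corners correspond to e, f, g, h
        }
    }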
Since the original picture is stored in a fixed storage area while the region information of the face in the original picture is determined later, the region information and its corresponding original picture are stored in different areas. In order to reference them together more conveniently when the video production is executed, a key-value pair is established between the region information and the corresponding original picture: the region information and the corresponding original picture serve as the key and the value of the key-value pair, and an index table composed of the physical storage addresses of the key and the value is stored. When the region information and the corresponding original picture need to be extracted, their physical storage addresses are obtained through the key-value pair, and the data of the region information and of the corresponding original picture are then obtained through those physical storage addresses for the video production processing.
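For illustration only (the map type, the choice of the storage path as lookup key and the literal path are assumptions of this sketch), such an association could be kept in a simple in-memory index:

    // Sketch: index pairing an original picture (looked up by its storage path)
    // with the region information of the face determined for it.
    java.util.Map<String, android.graphics.RectF> faceRegionIndex = new java.util.HashMap<>();
    android.graphics.RectF region = new android.graphics.RectF(120f, 80f, 860f, 820f); // assumed region info
    faceRegionIndex.put("/storage/emulated/0/DCIM/pic_0001.jpg", region);
    android.graphics.RectF lookedUp = faceRegionIndex.get("/storage/emulated/0/DCIM/pic_0001.jpg");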
Generally, a bitmap cropped on the mobile terminal is stored in the memory of the mobile terminal, the user can view the bitmap stored in the memory, and when a video is produced the bitmap is obtained through its stored path to produce the video; thus the user can view both the bitmaps in the mobile terminal and the video produced from them.
However, the process of storing the clipped bitmap in the memory of the mobile terminal requires a Central Processing Unit (CPU) of the mobile terminal to perform an I/O operation, which is a time-consuming operation, and since the image data of the bitmap belongs to data occupying a relatively large memory, the clipped bitmap is stored in the memory of the mobile terminal, which easily causes memory overflow.
Based on the above, the invention determines the image data of the face contained in the texture picture through the OpenGL three-dimensional image processing library according to the area information of the face in the original picture. And finally, drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file. The virtual display screen is a surface, and the adjusted image data is directly drawn to the surface of the encoder, namely the encoder acquires the image data, and then the image data is encoded and written into a video file.
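A hedged sketch of obtaining such a virtual display screen on Android: a MediaCodec encoder configured in surface-input mode exposes an input Surface onto which OpenGL ES can draw directly. The codec type, resolution and bitrate below are assumptions of this example.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import java.io.IOException;

    public final class EncoderHelper {
        // Creates an H.264 encoder whose input is a Surface ("virtual display screen");
        // frames drawn onto that Surface are encoded without intermediate bitmaps.
        public static MediaCodec createSurfaceEncoder() throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat(
                    MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);      // assumed output size
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);  // assumed bitrate
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            return codec;   // codec.createInputSurface() yields the "virtual display screen"
        }
    }

The Surface returned by codec.createInputSurface(), obtained after configure() and before start(), can be wrapped in an EGL window surface so that the face image data is drawn straight into the encoder, and the encoded output can then be written into an MP4 file, for example with MediaMuxer.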
In this scheme, an original picture in the memory is converted into a Texture, the image data containing the face in the Texture is determined according to the region information of the face in the original picture, the image data containing the face in the Texture is adjusted according to the preset video animation effect, and the adjusted image data is drawn onto the surface of the encoder; the surface of the encoder thus receives the image data, and the encoder encodes it and writes it into the video file. It can be seen that the determined image data containing the face is never turned into a bitmap that is stored as a visible picture through an I/O operation, so the processing capacity can be improved, the image data containing the face does not occupy memory space, and the risk of memory overflow during video production is reduced.
An overall flow chart of video production is shown below in conjunction with fig. 10. As shown in fig. 10, the method includes:
s1000: an original picture is extracted from a plurality of original pictures of a production video.
S1001: and identifying the position of the face of the original picture, and obtaining a face frame according to the position of the face of the original picture.
S1002: and judging whether the original picture comprises one face frame. If yes, step S1003 is executed; if the original picture comprises a plurality of face frames, step S1004 is executed.
S1003: and determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture.
S1004: and determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
S1005: and converting the original picture into a texture picture.
S1006: and determining image data of the face contained in the texture picture through an OpenGL three-dimensional image processing library according to the area information of the face in the original picture.
S1007: and determining the position information of the face at the central position of the display screen according to the size of the display screen of the mobile terminal.
S1008: the position information and the image data are drawn onto a virtual display screen in the encoder so that the encoder writes the image data on the virtual display screen to the video file.
After S1006 is executed and before S1008 is executed, in order to make the video more appealing, the image data of the face contained in the texture picture may be adjusted according to a preset video animation effect.
S1009: and judging whether a plurality of original pictures for making the video are all coded into the video file, if so, obtaining a result, and if not, executing the step S1000.
Fig. 11 illustrates a mobile terminal 1100 according to an example embodiment, the apparatus comprising: a processor 1110 and a memory 1120;
the memory 1120 is used for storing original pictures and video files;
the processor 1110 is configured to convert an original picture into a texture picture;
determining image data containing the face in the texture picture according to the region information of the face in the original picture;
determining position information of a face at the center position of a display screen according to the size of the display screen of the mobile terminal;
and drawing the position information and the image data containing the human face into an encoder to generate a video file.
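As a small aside, determining the position information that centres the face at the midpoint of the display screen reduces to simple arithmetic on the screen size and the image size; the sketch below shows one possible form, with hypothetical names.

```java
import android.graphics.Rect;

/** Hypothetical helper: centres a faceW x faceH image on a screenW x screenH display. */
final class CenterPlacement {

    static Rect centeredRect(int screenW, int screenH, int faceW, int faceH) {
        // The centre of the display screen is the midpoint of its diagonal,
        // i.e. (screenW / 2, screenH / 2).
        int left = (screenW - faceW) / 2;
        int top = (screenH - faceH) / 2;
        return new Rect(left, top, left + faceW, top + faceH);
    }
}
```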
Optionally, the processor 1110 is specifically configured to:
adjusting image data containing human faces in the texture pictures according to a preset video animation effect;
and drawing the position information and the adjusted image data into an encoder to generate a video file.
Optionally, the processor 1110 is specifically configured to:
the image data includes: the size of the image;
determining an animation effect corresponding to the texture picture according to a preset video animation effect;
copying image data including human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data;
determining the size information of each picture required for forming the animation effect corresponding to the texture picture according to the animation effect corresponding to the texture picture;
and adjusting the image size in the plurality of image data according to the size information of each picture.
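For instance, if the preset video animation effect is assumed to be a simple zoom, the copied image data can be given interpolated sizes as sketched below; the zoom effect, the scale range and the names are illustrative assumptions only.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper: per-frame sizes for a zoom-style animation effect. */
final class ZoomEffectSizes {

    /** Returns frameCount (width, height) pairs interpolated between start and end scale. */
    static List<int[]> frameSizes(int baseW, int baseH, int frameCount,
                                  float startScale, float endScale) {
        List<int[]> sizes = new ArrayList<>(frameCount);
        for (int i = 0; i < frameCount; i++) {
            // Linear interpolation between the start and end scale of the effect.
            float t = (frameCount == 1) ? 0f : i / (float) (frameCount - 1);
            float scale = startScale + (endScale - startScale) * t;
            sizes.add(new int[] { Math.round(baseW * scale), Math.round(baseH * scale) });
        }
        return sizes;
    }
}
```

Calling frameSizes(640, 480, 30, 1.0f, 1.2f), for example, yields thirty sizes growing from the original size to 1.2 times that size, one for each copied picture of the animation.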
Optionally, the processor 1110 is specifically configured to:
identifying the position of the face of an original picture, and obtaining a face frame according to the position of the face of the original picture;
if the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
Optionally, the processor 1110 is specifically configured to:
determining image data of the face contained in the texture picture through an OpenGL three-dimensional image processing library according to the area information of the face in the original picture;
and drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file.
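To make the final step concrete, the sketch below shows one possible way the encoded output could be written into the video file on Android, pairing MediaCodec with MediaMuxer; this pairing, and the simplified error handling, are assumptions for illustration and are not mandated by this disclosure.

```java
import android.media.MediaCodec;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

/** Drains encoded frames from the encoder and writes them into the video file. */
final class EncoderDrain {

    private final MediaCodec encoder;
    private final MediaMuxer muxer;
    private int trackIndex = -1;
    private boolean muxerStarted = false;

    EncoderDrain(MediaCodec encoder, MediaMuxer muxer) {
        this.encoder = encoder;
        this.muxer = muxer;
    }

    void drain() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int index = encoder.dequeueOutputBuffer(info, 10_000);
            if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
                break; // no encoded output available yet
            } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                trackIndex = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
                muxerStarted = true;
            } else if (index >= 0) {
                if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    info.size = 0; // codec config was already captured via getOutputFormat()
                }
                ByteBuffer encoded = encoder.getOutputBuffer(index);
                if (muxerStarted && info.size > 0 && encoded != null) {
                    muxer.writeSampleData(trackIndex, encoded, info);
                }
                encoder.releaseOutputBuffer(index, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    break; // the whole video file has been written
                }
            }
        }
    }
}
```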
In an exemplary embodiment, there is also provided a storage medium comprising instructions, for example a memory comprising instructions, executable by the processor 1110 of the mobile terminal 1100 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
An embodiment of the present invention further provides a computer program product, which, when running on an electronic device, enables the electronic device to execute any one of the video production methods described above in the embodiments of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A mobile terminal, comprising: a processor and a memory;
the memory is used for storing original pictures and video files;
the processor is used for converting the original picture into a texture picture;
determining image data of a face contained in the texture picture through an OpenGL three-dimensional image processing library according to area information of the face in the original picture, wherein the area information of the face is area information of the face and other areas except the face, and the other areas are determined according to the shortest distance between the edge of a face frame and the edge of the original picture;
determining position information of a face at the center position of a display screen according to the size of the display screen of the mobile terminal, wherein the center position is the middle point of a diagonal line of the display screen;
and drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file.
2. The mobile terminal of claim 1, wherein the processor is specifically configured to:
adjusting image data containing human faces in the texture pictures according to a preset video animation effect;
and drawing the position information and the adjusted image data into an encoder to generate a video file.
3. The mobile terminal of claim 2, wherein the processor is specifically configured to:
the image data includes: the size of the image;
determining an animation effect corresponding to the texture picture according to a preset video animation effect;
copying image data including human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data;
determining the size information of each picture required for forming the animation effect corresponding to the texture picture according to the animation effect corresponding to the texture picture;
and adjusting the image size in the plurality of image data according to the size information of each picture.
4. The mobile terminal of claim 1, wherein the processor is specifically configured to:
identifying the position of the face of an original picture, and obtaining a face frame according to the position of the face of the original picture;
if the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
5. A video production method is applied to a mobile terminal and comprises the following steps:
converting an original picture into a texture picture;
determining image data of a face contained in the texture picture through an OpenGL three-dimensional image processing library according to area information of the face in the original picture, wherein the area information of the face is area information of the face and other areas except the face, and the other areas are determined according to the shortest distance between the edge of a face frame and the edge of the original picture;
determining position information of a face at the center position of a display screen according to the size of the display screen of the mobile terminal, wherein the center position is the middle point of a diagonal line of the display screen;
and drawing the position information and the image data containing the human face to a virtual display screen in an encoder so that the encoder writes the image data on the virtual display screen into a video file.
6. The method of claim 5, wherein before the rendering the position information and the image data containing the human face into an encoder to generate a video file, the method further comprises:
adjusting image data containing human faces in the texture pictures according to a preset video animation effect;
the drawing the position information and the image data containing the human face into an encoder to generate a video file comprises:
and drawing the position information and the adjusted image data into an encoder to generate a video file.
7. The video production method according to claim 6, wherein the image data includes: the size of the image;
the adjusting the image data containing the human face in the texture picture according to the preset video animation effect comprises the following steps:
determining an animation effect corresponding to the texture picture according to a preset video animation effect;
copying image data including human faces in the texture pictures according to the number of pictures required for forming animation effects corresponding to the texture pictures to obtain a plurality of image data;
determining the size information of each picture required for forming the animation effect corresponding to the texture picture according to the animation effect corresponding to the texture picture;
and adjusting the image size in the plurality of image data according to the size information of each picture.
8. The video production method according to claim 5, wherein the method for obtaining the region information of the face in the original picture comprises:
identifying the position of the face of an original picture, and obtaining a face frame according to the position of the face of the original picture;
if the original picture comprises a face frame, determining the region information of the face in the original picture according to the shortest distance between the edge of the face frame and the edge corresponding to the original picture; or
And if the original picture comprises a plurality of face frames, determining the region information of the face in the original picture according to the shortest distance between the edge of the largest face frame and the edge corresponding to the original picture.
CN201911205831.1A 2019-11-29 2019-11-29 Mobile terminal and video production method Active CN111031377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911205831.1A CN111031377B (en) 2019-11-29 2019-11-29 Mobile terminal and video production method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911205831.1A CN111031377B (en) 2019-11-29 2019-11-29 Mobile terminal and video production method

Publications (2)

Publication Number Publication Date
CN111031377A CN111031377A (en) 2020-04-17
CN111031377B true CN111031377B (en) 2021-12-24

Family

ID=70203894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911205831.1A Active CN111031377B (en) 2019-11-29 2019-11-29 Mobile terminal and video production method

Country Status (1)

Country Link
CN (1) CN111031377B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184854B (en) * 2020-09-04 2023-11-21 上海硬通网络科技有限公司 Animation synthesis method and device and electronic equipment
CN113422967B (en) * 2021-06-07 2023-01-17 深圳康佳电子科技有限公司 Screen projection display control method and device, terminal equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6878935B2 (en) * 2002-04-03 2005-04-12 General Phosphorix Method of measuring sizes in scan microscopes
CN103164119B (en) * 2013-02-25 2016-06-08 东莞宇龙通信科技有限公司 The adaptive display method of communication terminal and image
CN107277607A (en) * 2017-06-09 2017-10-20 努比亚技术有限公司 A kind of screen picture method for recording, terminal and computer-readable recording medium

Also Published As

Publication number Publication date
CN111031377A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111597000B (en) Small window management method and terminal
CN111031377B (en) Mobile terminal and video production method
CN113747199A (en) Video editing method, video editing apparatus, electronic device, storage medium, and program product
CN113409427A (en) Animation playing method and device, electronic equipment and computer readable storage medium
EP4254927A1 (en) Photographing method and electronic device
CN113709026B (en) Method, device, storage medium and program product for processing instant communication message
CN113038141B (en) Video frame processing method and electronic equipment
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
CN114449171B (en) Method for controlling camera, terminal device, storage medium and program product
US20230229375A1 (en) Electronic Device, Inter-Device Screen Coordination Method, and Medium
CN113157092B (en) Visualization method, terminal device and storage medium
CN116095413A (en) Video processing method and electronic equipment
CN114979785A (en) Video processing method and related device
CN111163220B (en) Display method, communication terminal and computer storage medium
CN113760164A (en) Display device and response method of control operation thereof
CN113542711A (en) Image display method and terminal
CN111158563A (en) Electronic terminal and picture correction method
CN111324255A (en) Application processing method based on double-screen terminal and communication terminal
CN115334239B (en) Front camera and rear camera photographing fusion method, terminal equipment and storage medium
CN113129238B (en) Photographing terminal and image correction method
CN113253905B (en) Touch method based on multi-finger operation and intelligent terminal
CN111479075B (en) Photographing terminal and image processing method thereof
CN111142648B (en) Data processing method and intelligent terminal
CN115291789B (en) Handwriting fitting method, handwriting fitting device, terminal equipment and medium
CN113741855B (en) Audio playing method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee after: Qingdao Hisense Mobile Communication Technology Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Jiangxi City Road No. 11

Patentee before: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY Co.,Ltd.
