CN108184165B - Video playing method, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN108184165B
Authority
CN
China
Prior art keywords
video
frame rate
current scene
shooting
play
Prior art date
Legal status
Active
Application number
CN201711461064.1A
Other languages
Chinese (zh)
Other versions
CN108184165A (en)
Inventor
李小朋
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711461064.1A
Publication of CN108184165A
Application granted
Publication of CN108184165B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6587Control parameters, e.g. trick play commands, viewpoint selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments


Abstract

The invention discloses a video playing method, an electronic device and a computer-readable storage medium. The video playing method comprises the following steps: judging whether a moving object appears in the current scene while the camera shoots the current scene at the shooting frame rate to record a video; recording an initial timestamp and a cut-off timestamp of the moving object when the moving object appears in the current scene; determining a static part and a dynamic part in the video according to the initial timestamp and the cut-off timestamp when the video is played; and controlling the video player to play the static part at a first playing frame rate and the dynamic part at a second playing frame rate, wherein the second playing frame rate is less than the first playing frame rate. The video playing method, electronic device and computer-readable storage medium of the embodiments of the invention can automatically determine the motion segment of a recorded video during recording and play that segment in slow motion directly, without manual interception by the user, so the user experience is better.

Description

Video playing method, electronic device and computer readable storage medium
Technical Field
The present invention relates to the field of video playing technologies, and in particular, to a video playing method, an electronic device, and a computer-readable storage medium.
Background
Existing mobile phones cannot automatically play videos in slow motion, and the user experience is therefore poor.
Disclosure of Invention
The embodiment of the invention provides a video playing method, an electronic device and a computer readable storage medium.
The invention provides a video playing method which is used for an electronic device. The electronic device comprises a camera and a video player, and the video playing method comprises the following steps:
judging whether a moving object appears in the current scene or not in the process that the camera shoots the current scene according to the shooting frame rate to record the video;
when a moving object appears in the current scene, recording an initial timestamp and a stop timestamp of the moving object;
when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
and controlling the video player to play the static part at a first play frame rate, and controlling the video player to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
The invention provides an electronic device. The electronic device comprises a camera, a video player and a processor. The processor is configured to determine whether a moving object appears in the current scene during a process of shooting the current scene by the camera according to a shooting frame rate to record the video, record an initial timestamp and an end timestamp of the moving object when the moving object appears in the current scene, determine a static portion and a dynamic portion in the video according to the initial timestamp and the end timestamp when the video is played, control the video player to play the static portion at a first playing frame rate, and control the video player to play the dynamic portion at a second playing frame rate, where the second playing frame rate is less than the first playing frame rate.
The invention provides an electronic device. The electronic device includes a camera, a video player, one or more processors, memory, and one or more programs. Wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the video playback method described above.
The invention provides a computer-readable storage medium. The computer readable storage medium includes a computer program for use in conjunction with an electronic device capable of imaging. The computer program can be executed by a processor to implement the video playing method.
According to the video playing method, the electronic device and the computer-readable storage medium, whether a moving object appears in the current scene shot by the camera can be identified during video recording, and the initial timestamp and the cut-off timestamp of the moving object are recorded when the motion occurs and stops, so that the motion segment in the recorded video is automatically determined according to the initial timestamp and the cut-off timestamp.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flow chart of a video playing method according to some embodiments of the invention.
FIG. 2 is a schematic view of an electronic device according to some embodiments of the invention.
Fig. 3 is a flow chart of a video playing method according to some embodiments of the invention.
Fig. 4 is a flow chart of a video playing method according to some embodiments of the invention.
Fig. 5 is a schematic diagram of the two-frame difference of the video playing method according to some embodiments of the present invention.
Fig. 6 is a schematic diagram of the three-frame difference of the video playing method according to some embodiments of the present invention.
Fig. 7 is a flowchart illustrating a video playing method according to some embodiments of the invention.
Fig. 8 is a flowchart illustrating a video playing method according to some embodiments of the invention.
FIG. 9 is a schematic view of an electronic device according to some embodiments of the invention.
FIG. 10 is a schematic view of an electronic device according to some embodiments of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1 to 2, a video playing method according to an embodiment of the invention can be applied to an electronic device 100. The electronic device 100 includes a camera 10 and a video player 20. The video playing method comprises the following steps:
02: in the process that the camera 10 shoots a current scene according to the shooting frame rate to record a video, whether a moving object appears in the current scene is judged;
04: when a moving object appears in the current scene, recording an initial timestamp and a cut-off timestamp of the moving object;
06: when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
08: and controlling the video player 20 to play the static part at a first play frame rate, and controlling the video player 20 to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
Referring to fig. 2 again, the video playing method according to the embodiment of the invention can be implemented by the electronic device 100 according to the embodiment of the invention. The electronic device 100 includes a camera 10, a video player 20, and a processor 30. Step 02, step 04, step 06, and step 08 may all be implemented by processor 30.
That is, the processor 30 is configured to determine whether a moving object occurs in the current scene during the process of shooting the current scene by the camera 10 according to the shooting frame rate to record the video, record an initial timestamp and an end timestamp of the moving object when the moving object occurs in the current scene, determine a static portion and a dynamic portion in the video according to the initial timestamp and the end timestamp when the video is played, control the video player 20 to play the static portion at a first playing frame rate, and control the video player 20 to play the dynamic portion at a second playing frame rate, where the second playing frame rate is less than the first playing frame rate.
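The segmentation and dual-rate playback of steps 06 and 08 can be sketched as follows. The function names, the timestamp representation (seconds), and the 120/30 fps values (assuming the whole video was shot at 120 fps, as one embodiment below describes) are illustrative assumptions, not the patent's implementation:

```python
def split_segments(duration, start_ts, end_ts):
    """Split a video timeline (seconds) into static and dynamic parts using
    the recorded initial (start_ts) and cut-off (end_ts) timestamps."""
    segments = []
    if start_ts > 0:
        segments.append(("static", 0.0, start_ts))
    segments.append(("dynamic", start_ts, end_ts))
    if end_ts < duration:
        segments.append(("static", end_ts, duration))
    return segments


def play_rate(kind, first_rate=120, second_rate=30):
    # Static parts play at the first (higher) playing frame rate; the
    # dynamic part plays at the second, lower rate, producing slow motion.
    return first_rate if kind == "static" else second_rate


segments = split_segments(10.0, 3.0, 5.0)
# -> [('static', 0.0, 3.0), ('dynamic', 3.0, 5.0), ('static', 5.0, 10.0)]
```

The video player would then walk the segment list, switching its playing frame rate at each boundary.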
It can be understood that when a moving object or scene appears in a video, the user often wants to play the corresponding segment slowly so as to observe the motion details of the moving object or scene. However, existing video playing methods usually require the user to manually intercept the motion segment in the video and then play the intercepted segment slowly. A playing mode that achieves slow play only through manual user operation is not very intelligent, and the user experience is poor.
According to the video playing method, whether a moving object appears in the current scene shot by the camera 10 can be identified during video recording, and the initial timestamp and the cut-off timestamp of the moving object are recorded when the motion occurs and stops, so that the motion segment in the recorded video is automatically determined according to the initial timestamp and the cut-off timestamp.
In some embodiments, the electronic device 100 may be a mobile phone, a notebook computer, a tablet computer, a smart watch, a smart bracelet, a smart helmet, smart glasses, a single lens reflex camera, or the like.
Referring to fig. 3, in some embodiments, the video playing method according to the embodiments of the present invention further includes:
011: judging whether a user selects a video slow playing mode or not; when the user selects the video slow play mode, step 02 is entered.
Referring back to fig. 2, step 011 can be implemented by processor 30. That is, the processor 30 may be further configured to determine whether the user selects the video slow-play mode, and when the user selects the video slow-play mode, enter a process that the camera 10 shoots the current scene according to the shooting frame rate to record the video, and determine whether a moving object appears in the current scene.
The user can select whether to enter the video slow-play mode in various ways. For example, the user touches the touch screen of the electronic device 100 to click a virtual key for entering the video slow-play mode; or the user selects, with a keyboard or mouse, a selection key for entering the video slow-play mode displayed on the UI of the electronic device 100; or the user gives a voice command: for example, a microphone of the electronic device 100 collects the voice uttered by the user and sends it to the processor 30, and when the processor 30 recognizes the voice as "use video slow play", the electronic device 100 is triggered to enter the video slow-play mode.
Therefore, the electronic device 100 can provide a plurality of play modes for the user, such as slow video play and normal video play, giving the user more viewing options and improving user satisfaction.
Referring to fig. 3, in some embodiments, the shooting frame rates of the camera 10 include a first shooting frame rate and a second shooting frame rate, wherein the first shooting frame rate is less than the second shooting frame rate. The video playing method of the embodiment of the invention further comprises the following steps:
012: before a moving object does not appear in the current scene, controlling the camera 10 to shoot the current scene according to a first shooting frame rate; and
03: and after a moving object appears in the current scene, controlling the camera 10 to shoot the current scene according to the second shooting frame rate.
Referring back to fig. 2, in some embodiments, step 012 and step 03 can be implemented by processor 30. That is, the processor 30 may be configured to control the camera 10 to shoot the current scene at the first shooting frame rate before a moving object appears in the current scene, and to control the camera 10 to shoot the current scene at the second shooting frame rate after a moving object appears in the current scene.
It can be understood that the conventional shooting frame rate for recording video is 30 frames per second (fps). Since human eyes perceive continuous, smooth pictures only when the picture refresh rate reaches or exceeds 24 fps during playback, a video recorded at a shooting frame rate of 30 fps can, on one hand, meet the requirement of picture smoothness and, on the other hand, reduce the storage space occupied by the video file. However, when a user wants to play a video slowly, the video needs to be recorded at a higher shooting frame rate, for example 120 fps, and played at a playing frame rate of 30 fps. On one hand, the 120 images originally recorded in 1 second now take 4 seconds to play, achieving the slow-play effect; on the other hand, a playing frame rate of 30 fps still meets the requirement of fluency.
Therefore, in the video playing method of the embodiment of the present invention, the slow video playing mode is selected before recording, and the camera 10 then starts recording the video at the first shooting frame rate. Assuming the first shooting frame rate is 30 fps, the camera 10 captures 30 frames of images within 1 second. When the processor 30 processes the multiple frames of images captured by the camera 10 and determines that a moving object appears in the scene, the processor 30 records the initial timestamp of the moving object and controls the camera 10 to start recording the video at the second shooting frame rate, for example 120 fps, at which the camera 10 captures 120 frames of images within 1 second. When the processor 30 processes the images captured by the camera 10 and determines that the moving object in the scene has stopped moving, the processor 30 records the cut-off timestamp at which the moving object stopped moving and controls the camera 10 to record the video at the first shooting frame rate again, for example 30 fps.
Therefore, the dynamic part in one video is recorded at a higher shooting frame rate, and the static part in one video is recorded at a lower shooting frame rate, so that the requirement of slow playing of the video of the dynamic part in the following process can be met, the total number of images in the video file can be reduced, and the storage space occupied by the video file can be further reduced.
Of course, it is also feasible to record video at the second shooting frame rate all the time.
Further, in the video playing process, the first playing frame rate is equal to the shooting frame rate at which the corresponding part of the video was recorded.
Specifically, when the video is recorded at a uniformly high shooting frame rate throughout the entire recording process, for example 120 fps, the first playing frame rate corresponding to the static portion during playback is also 120 fps. When the static portion is recorded at a lower first shooting frame rate, for example 30 fps, and the dynamic portion is recorded at a higher second shooting frame rate, for example 120 fps, the first playing frame rate corresponding to the static portion during playback is also 30 fps. In this way, when the user watches the static portion of the video, it appears to play at constant, real-time speed, with no impression of fast or slow playing, and the viewing experience is better.
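The frame-rate arithmetic above can be checked with a small sketch; the helper name is an illustrative assumption:

```python
def playback_duration(frames_recorded, play_fps):
    """Seconds needed to display a segment at a given playing frame rate."""
    return frames_recorded / play_fps


# Dynamic part: 1 s of real time captured at 120 fps yields 120 frames;
# played back at 30 fps it stretches to 4 s (a 4x slow-motion effect).
slow = playback_duration(120, 30)    # 4.0 seconds

# Static part: captured at 30 fps and played at 30 fps runs in real time.
normal = playback_duration(30, 30)   # 1.0 second
```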
Referring to fig. 4, in some embodiments, in the process that the camera 10 shoots the current scene at the shooting frame rate to record the video, the step 02 of determining whether a moving object appears in the current scene includes:
021: acquiring a first frame image and a second frame image which are adjacent and obtained by shooting a current scene by a camera 10 according to a shooting frame rate;
022: acquiring a plurality of brightness difference values of the first frame image and the second frame image according to the brightness value of each pixel point in the first frame image and the brightness value of each pixel point in the second frame image;
023: calculating the number of pixel points with the brightness difference value larger than the preset brightness difference value; and
024: and when the number is larger than the preset number, determining that the moving object appears in the current scene.
Referring back to fig. 2, in some embodiments, step 021, step 022, step 023, and step 024 may all be implemented by processor 30. That is to say, the processor 30 may also be configured to obtain adjacent first and second frame images captured by the camera 10 shooting the current scene at the shooting frame rate, obtain a plurality of brightness difference values between the first frame image and the second frame image according to the brightness value of each pixel in the first frame image and the brightness value of each pixel in the second frame image, calculate the number of pixels whose brightness difference value is larger than the preset brightness difference value, and determine that a moving object appears in the current scene when this number is larger than the preset number.
The adjacent first frame image and the second frame image should be understood in a broad sense, that is, assuming that the first frame image is the nth frame image, the second frame image adjacent to the nth frame image may be: the (n + 1) th frame image, the (n + 2) th frame image, the (n-1) th frame image, the (n-2) th frame image, the (n-3) th frame image and the like. In a specific embodiment of the present invention, the absolute value of the number of frames that differ between the first frame image and the second frame image must not exceed 4 frames.
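The broad notion of "adjacent" frames described above (distinct frames whose indices differ by at most 4) can be expressed as a small hypothetical helper:

```python
def are_adjacent(n, m, max_gap=4):
    """Frames n and m count as 'adjacent' in the broad sense used here:
    distinct frame indices no more than max_gap apart."""
    return n != m and abs(n - m) <= max_gap


are_adjacent(7, 9)    # True: the nth and (n+2)th frames qualify
are_adjacent(7, 12)   # False: a gap of 5 frames exceeds the limit
```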
Specifically, referring to fig. 5, when the camera 10 shoots the current scene at the shooting frame rate to record a video, a plurality of frames of images are generated. The processor 30 selects the nth frame image and the (n+1)th frame image from the plurality of frames to detect whether a moving object exists in the current scene. Specifically, the processor 30 converts the RGB-space pixel values of the nth frame image into YCrCb-space pixel values according to the formula Y = 0.2990R + 0.5870G + 0.1140B, thereby obtaining the brightness value of each pixel in the nth frame image; similarly, the processor 30 converts the RGB-space pixel values of the (n+1)th frame image into YCrCb-space pixel values, thereby obtaining the brightness value of each pixel in the (n+1)th frame image. Suppose the brightness value of a pixel in the nth frame image is F_n(x, y) and the brightness value of the corresponding pixel in the (n+1)th frame image is F_{n+1}(x, y). Subtracting the brightness values of corresponding pixels of the two frames and taking the absolute value gives the difference image D_n, in which the brightness value of each pixel is

D_n(x, y) = |F_{n+1}(x, y) - F_n(x, y)|

Subsequently, each pixel in the difference image D_n is binarized to obtain a binarized image R_n, as shown in the formula:

R_n(x, y) = 255, if D_n(x, y) > T; R_n(x, y) = 0, otherwise

where T is the preset brightness difference value. That is, each pixel of the difference image D_n is compared with the preset brightness difference value: pixels whose brightness difference is greater than the preset value are marked as motion points and their pixel values are set to 255, while pixels whose brightness difference is not greater than the preset value are marked as background points and their pixel values are set to 0. Finally, the number of pixels with a pixel value of 255 is counted, and when this number is greater than the preset number, it is determined that a moving object appears in the current scene.
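Steps 021 to 024 can be sketched in pure Python as follows. The frame representation (nested lists of RGB tuples), the threshold values, and the function names are illustrative assumptions; a real implementation would use vectorized image operations:

```python
def luminance(frame_rgb):
    # Convert an RGB frame (nested lists of (R, G, B) tuples) to per-pixel
    # luma Y using the coefficients given in the text.
    return [[0.2990 * r + 0.5870 * g + 0.1140 * b for (r, g, b) in row]
            for row in frame_rgb]


def motion_detected(frame_a, frame_b, diff_thresh=30, count_thresh=4):
    """Two-frame difference: binarize |Y_{n+1} - Y_n| against diff_thresh
    and report motion when enough pixels changed (thresholds illustrative)."""
    ya, yb = luminance(frame_a), luminance(frame_b)
    moving = sum(
        1
        for row_a, row_b in zip(ya, yb)
        for pa, pb in zip(row_a, row_b)
        if abs(pb - pa) > diff_thresh   # this pixel would be marked 255
    )
    return moving > count_thresh


dark = [[(0, 0, 0)] * 8 for _ in range(8)]
light = [[(200, 200, 200)] * 8 for _ in range(8)]
motion_detected(dark, light)   # True: all 64 pixels change brightness
motion_detected(dark, dark)    # False: no pixel changes
```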
In the specific embodiment of the invention, a two-frame difference method is adopted to detect whether a moving object appears in the current scene. In some embodiments, a three-frame difference method may also be used to detect whether a moving object is present in the current scene.
Specifically, referring to fig. 6, when the camera 10 shoots the current scene at the shooting frame rate to record a video, a plurality of frames of images are generated. The processor 30 selects the nth, (n+1)th and (n+2)th frame images from the plurality of frames to detect whether a moving object exists in the current scene. As above, the processor 30 converts the RGB-space pixel values of the three frames into YCrCb-space pixel values according to the formula Y = 0.2990R + 0.5870G + 0.1140B, thereby obtaining the brightness values F_n(x, y), F_{n+1}(x, y) and F_{n+2}(x, y) of the pixels in the nth, (n+1)th and (n+2)th frame images respectively. Subtracting the brightness values of corresponding pixels of the nth and (n+1)th frames and taking the absolute value gives the difference image D_n, with D_n(x, y) = |F_{n+1}(x, y) - F_n(x, y)|; subtracting the brightness values of corresponding pixels of the (n+1)th and (n+2)th frames and taking the absolute value gives the difference image D_{n+1}, with D_{n+1}(x, y) = |F_{n+2}(x, y) - F_{n+1}(x, y)|. The two difference images are then intersected to obtain the difference image D'_n:

D'_n(x, y) = |F_{n+1}(x, y) - F_n(x, y)| ∩ |F_{n+2}(x, y) - F_{n+1}(x, y)|

Then, each pixel in the difference image D'_n is binarized to obtain a binarized image R_n:

R_n(x, y) = 255, if D'_n(x, y) > T; R_n(x, y) = 0, otherwise

Each pixel of the difference image D'_n is compared with the preset brightness difference value: pixels whose value is greater than the preset value are marked as motion points and their pixel values are set to 255, while pixels whose value is not greater than the preset value are marked as background points and their pixel values are set to 0. Finally, the number of pixels with a pixel value of 255 is counted, and when this number is greater than the preset number, it is determined that a moving object appears in the current scene.
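The three-frame difference (a pixel counts as a motion point only if it appears in both binarized difference images) can be sketched as follows, operating on per-pixel luma grids; the threshold and names are illustrative:

```python
def three_frame_motion_mask(y0, y1, y2, thresh=30):
    """Three-frame difference: mark a pixel 255 only when it differs from
    BOTH neighbouring frames, i.e. the intersection of the two binarized
    difference images D_n and D_{n+1}; otherwise mark it 0."""
    return [[255 if abs(b - a) > thresh and abs(c - b) > thresh else 0
             for a, b, c in zip(ra, rb, rc)]
            for ra, rb, rc in zip(y0, y1, y2)]


y0 = [[0, 0, 0]]
y1 = [[0, 100, 0]]   # middle pixel changes in frame n+1...
y2 = [[0, 0, 0]]     # ...and changes back in frame n+2
three_frame_motion_mask(y0, y1, y2)   # [[0, 255, 0]]
```

Compared with the two-frame difference, requiring the change to appear in both difference images suppresses noise that flickers in a single frame.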
Similarly, whether the moving object in the current scene has stopped moving can be judged by the same inter-frame difference method: when the number of pixels in the difference image whose brightness difference values are greater than the preset brightness difference value falls below the preset number, it is judged that the moving object has stopped moving.
In this way, whether a moving object appears in the current scene is detected from the brightness differences between two or more adjacent frames of images; the initial timestamp of the current moment is recorded as soon as a moving object is detected, and the cut-off timestamp of the current moment is recorded as soon as the moving object is detected to have stopped moving.
Referring to fig. 7, in some embodiments, the step 02 of determining whether a moving object appears in the current scene during the process that the camera 10 shoots the current scene at the shooting frame rate to record the video includes:
025: acquiring a multi-frame image obtained by shooting a current scene by a camera 10 to obtain optical flow data; and
026: and judging whether a moving object appears in the current scene according to the optical flow data.
Referring back to FIG. 2, in some embodiments, both step 025 and step 026 can be implemented by the processor 30. That is, the processor 30 is further configured to acquire a plurality of frames of images of the current scene captured by the camera 10 to obtain optical flow data, and determine whether a moving object is present in the current scene according to the optical flow data.
Specifically, optical flow refers to the speed at which gray-scale patterns move in a captured image. When the human eye and an observed object move relative to each other, the image of the object forms a series of continuously changing images on the retina (i.e., the imaging plane), and this series of continuously changing image information "flows" through the retina as if it were a stream of light, hence the name optical flow. Optical flow is defined on a per-pixel basis and reflects the changes of objects in the current scene. For a stationary camera 10, moving objects in the current scene form a series of continuously changing images on the imaging plane. Each frame of image captured by the camera 10 has a plurality of pixels, each having a motion vector, i.e., an optical flow.
In practical applications, when the camera 10 shoots the current scene at the shooting frame rate to record a video, a plurality of frames of images are obtained. Taking two adjacent frames, namely the kth (k ∈ N) frame and the (k+1)th frame, as an example, the time interval between capturing the kth frame image and capturing the (k+1)th frame image is Δt. When the value of Δt is small, each pixel in the kth frame image can be placed in one-to-one correspondence with a pixel in the (k+1)th frame image. At this time, the optical flow of each pixel p in the image can be represented as an optical flow vector, or motion vector, (u_p, v_p), where u_p represents the horizontal component of the vector of the pixel p and v_p represents the vertical component. If the pixel p does not move between the adjacent frames, its motion vector is 0; if the pixel p moves, its motion vector is not 0. When the number of pixels whose optical flow vectors are not 0 in the two frame images exceeds a preset number, it is determined that a moving object appears in the current scene.
Similarly, whether the moving object in the current scene has stopped moving can also be detected by the optical flow method: when the number of pixels whose optical flow vectors are not 0 in the two frame images falls below the preset number, it is judged that the moving object has stopped moving.
In this way, whether a moving object appears in the current scene is detected according to the optical flow method, and the initial timestamp of the current time is recorded immediately when the moving object is detected, and the ending timestamp of the current time is recorded immediately when the moving object is detected to stop.
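The optical-flow check can be sketched with a naive block-matching flow estimator. This is a minimal illustration, not the method a real system would use (those typically rely on Lucas-Kanade or Farneback); the block size, search radius and block-count threshold are assumptions.

```python
import numpy as np

def block_flow(prev, curr, block=8, search=4):
    # For each block of `prev`, find the displacement (u, v) within +/-search
    # whose block in `curr` matches best (smallest sum of absolute differences).
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(int)
            best = np.abs(ref - curr[y:y + block, x:x + block].astype(int)).sum()
            best_uv = (0, 0)  # ties favour "no motion"
            for v in range(-search, search + 1):
                for u in range(-search, search + 1):
                    yy, xx = y + v, x + u
                    if (u, v) != (0, 0) and 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cost = np.abs(ref - curr[yy:yy + block, xx:xx + block].astype(int)).sum()
                        if cost < best:
                            best, best_uv = cost, (u, v)
            flow[by, bx] = best_uv
    return flow

def flow_detects_motion(prev, curr, count_thresh=1):
    # Report a moving object when more than count_thresh blocks carry a
    # non-zero motion vector, mirroring the per-pixel rule in the text.
    flow = block_flow(prev, curr)
    return int((np.abs(flow).sum(axis=2) > 0).sum()) > count_thresh
```

Identical frames produce an all-zero flow field; a patch that shifts by a few pixels between frames produces non-zero motion vectors in the blocks it touches.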
In some embodiments, when the processor 30 processes two or more adjacent frames of images and determines that a moving object appears in the current scene, the processor 30 records the capturing time corresponding to one of the two or more adjacent frames of images as the initial timestamp; and when the processor 30 processes two or more adjacent frames of images and determines that the moving object in the current scene has stopped, the processor 30 records the capturing time corresponding to one of the two or more adjacent frames of images as the cutoff timestamp. In a recorded video segment, there may be multiple pairs of corresponding initial timestamps and ending timestamps, and the processor 30 stores one or more such time pairs in a pre-designed structure array. For example, if the size of the structure array is 2 × 5 + 1, the structure array can store at most 5 time pairs, and the last element in the structure array is the start time of the video recording. After the video recording is finished, the processor 30 packages the structure array, the images shot by the camera 10 and the audio recorded by the microphone of the electronic device 100 together to form a video file. When the user wants to play the video file, the processor 30 decapsulates the video file to separate the audio data, the image data and the time-pair data. The audio data and the image data are both compression-encoded, and therefore need to be decoded and restored to uncompressed image data and uncompressed original audio data. Finally, the decoded image data and audio data are played synchronously, and the dynamic parts of the video are played slowly according to the time pairs.
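The structure-array bookkeeping above can be sketched as a pack/unpack pair. The 2 × 5 + 1 layout follows the example in the text; the function names and the use of (0.0, 0.0) as an "empty slot" sentinel are illustrative assumptions.

```python
MAX_PAIRS = 5  # the 2*5+1 array from the example holds at most 5 time pairs

def pack_time_array(pairs, record_start):
    # Flatten up to MAX_PAIRS (initial, cutoff) timestamp pairs into a
    # 2*MAX_PAIRS+1 array; the last slot stores the recording start time.
    if len(pairs) > MAX_PAIRS:
        raise ValueError("structure array holds at most %d time pairs" % MAX_PAIRS)
    arr = [0.0] * (2 * MAX_PAIRS + 1)
    for i, (t_init, t_cut) in enumerate(pairs):
        arr[2 * i], arr[2 * i + 1] = t_init, t_cut
    arr[-1] = record_start
    return arr

def unpack_time_array(arr):
    # Recover the non-empty time pairs and the recording start time.
    pairs = [(arr[2 * i], arr[2 * i + 1]) for i in range(MAX_PAIRS)
             if (arr[2 * i], arr[2 * i + 1]) != (0.0, 0.0)]
    return pairs, arr[-1]
```

After demuxing a video file, the player would call `unpack_time_array` on the separated time-pair data to know which intervals to play slowly.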
Referring to fig. 8, in some embodiments, the step 06 of determining the static part and the dynamic part of the video according to the initial timestamp and the ending timestamp when the video is played includes:
061: comparing the duration of the video of each dynamic part with the preset duration; and
062: and reclassifying the video of the dynamic part with the time length less than the preset time length as the video of the static part.
Referring back to fig. 2, in some embodiments, both step 061 and step 062 may be implemented by processor 30. That is, the processor 30 may be further configured to compare the duration of the video of each dynamic portion with the preset duration, and to reclassify the video of the dynamic portion having a duration less than the preset duration as the video of the static portion.
Specifically, the video recording process may include a plurality of dynamic portions in which moving objects appear. For example, the video recording starts at 00:00:00, a moving object appears at 00:05:00, stops moving at 00:07:00, appears again at 00:13:00, stops moving at 00:14:30, appears again at 00:21:10, and stops moving at 00:21:13, so that the video segment includes three dynamic portions in which the moving object appears. The duration of the video of the first dynamic part is 2 minutes and the duration of the video of the second dynamic part is 1 minute 30 seconds. The duration of the video of the third dynamic part is 3 seconds. Assuming that the preset time duration is 10 seconds, since 3 seconds is less than 10 seconds, the video of the third dynamic part is reclassified as the video of the static part.
It is understood that when the duration of the video of the dynamic portion is too short, the motion of a moving object in the dynamic portion may not be a subject portion of interest to the user. Therefore, the dynamic part with too short time can be merged into the video of the static part, and the slow playing operation is not executed on the part of the dynamic video during playing. Therefore, when a user watches videos, the problem that the watching experience of the user is influenced due to the fact that the switching frequency between the normal-speed playing and the slow-speed playing is too high can be avoided.
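The reclassification of steps 061 and 062 amounts to a duration filter over the dynamic parts; a minimal sketch follows, with timestamps in seconds and the 10-second preset mirroring the example above (the function name is an assumption).

```python
def keep_dynamic_parts(time_pairs, preset_duration=10.0):
    # Dynamic parts shorter than preset_duration are reclassified as static,
    # i.e. dropped from the list of segments that will be played slowly.
    return [(t0, t1) for (t0, t1) in time_pairs if t1 - t0 >= preset_duration]
```

With the example segments of 2 minutes, 1 minute 30 seconds and 3 seconds, only the first two survive the 10-second preset.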
Further, when the duration of the video of the dynamic portion is too long, continuously performing the slow playing operation on it also degrades the viewing experience. For example, when the user uses the electronic device 100 to record a video of a child playing football, the child and the football are usually in a continuous state of motion, so the video of the dynamic portion is long; if the slow playing operation is performed on the entire dynamic portion during playback, the watching experience of the user is greatly reduced. Therefore, the recording conditions of the initial timestamp and the ending timestamp can be further restricted: under the condition that a moving object appears in the current scene, it is additionally detected whether a specific object, or several specific objects simultaneously, appear in the current scene, and the current time is recorded as the initial timestamp only when the specific object appears or the specific objects appear simultaneously. Taking the video of the child kicking the football as an example, it can be further identified whether three specific objects, namely the child, the football and the goal, appear simultaneously in the current scene. When the three specific objects appear simultaneously, that time is recorded as the initial timestamp; when any one of the three specific objects disappears from the current scene, that time is recorded as the ending timestamp. It can be understood that when the child, the football and the goal appear simultaneously in the current scene, the child is likely attempting a shot on goal, and this process is most probably the main part the user cares about. Performing slow playing only on this process avoids the problem that an overly long dynamic portion reduces the viewing experience, and also extracts the highlight of the video for the user, further improving the user's viewing experience.
Referring to fig. 9, an electronic device 100 according to an embodiment of the present invention includes a camera 10, a video player 20, one or more processors 30, a memory 40, and one or more programs 41. Wherein the one or more programs 41 are stored in the memory 40 and configured to be executed by the one or more processors 30. The program 41 includes instructions for executing the video playback method according to any one of the above embodiments.
For example, program 41 includes instructions for performing the following steps:
02: in the process that the camera 10 shoots a current scene according to the shooting frame rate to record a video, whether a moving object appears in the current scene is judged;
04: when a moving object appears in the current scene, recording an initial timestamp and a cut-off timestamp of the moving object;
06: when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
08: and controlling the video player 20 to play the static part at a first play frame rate, and controlling the video player 20 to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
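Step 08 can be sketched as a lookup over the dynamic-part time pairs. The concrete frame rates (240 fps for the first play frame rate, 30 fps for the second) are illustrative assumptions; the disclosure only requires that the second be smaller than the first.

```python
def play_frame_rate(t, dynamic_pairs, first_rate=240, second_rate=30):
    # Inside a dynamic part the video plays at the smaller second play frame
    # rate (slow motion); static parts play at the first play frame rate.
    for t_init, t_cut in dynamic_pairs:
        if t_init <= t < t_cut:
            return second_rate
    return first_rate
```

The player would query this for the current playback position and switch its output frame rate accordingly.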
As another example, program 41 also includes instructions for performing the steps of:
012: before a moving object appears in the current scene, controlling the camera 10 to shoot the current scene according to a first shooting frame rate; and
03: and after a moving object appears in the current scene, controlling the camera 10 to shoot the current scene according to the second shooting frame rate.
Referring to fig. 10, a computer-readable storage medium 200 according to an embodiment of the present invention includes a computer program 201 for use with the electronic device 100 capable of capturing images. The computer program 201 is executable by the processor 30 to perform the video playing method according to any of the above embodiments. The computer-readable storage medium 200 may be a storage medium independent from the electronic device 100, or may be a storage medium integrated in the electronic device 100.
For example, the computer program 201 may be executed by the processor 30 to perform the following steps:
02: in the process that the camera 10 shoots a current scene according to the shooting frame rate to record a video, whether a moving object appears in the current scene is judged;
04: when a moving object appears in the current scene, recording an initial timestamp and a cut-off timestamp of the moving object;
06: when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
08: and controlling the video player 20 to play the static part at a first play frame rate, and controlling the video player 20 to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
As another example, the computer program 201 may also be executable by the processor 30 to perform the steps of:
012: before a moving object appears in the current scene, controlling the camera 10 to shoot the current scene according to a first shooting frame rate; and
03: and after a moving object appears in the current scene, controlling the camera 10 to shoot the current scene according to the second shooting frame rate.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (16)

1. A video playing method is used for an electronic device, and the electronic device comprises a camera and a video player, and the video playing method comprises the following steps:
judging whether a moving object appears in the current scene or not in the process that the camera shoots the current scene according to the shooting frame rate to record the video;
when a moving object appears in the current scene, judging whether a specific object appears in the current scene;
when the specific object appears in the current scene, recording an initial timestamp and an ending timestamp of the appearance of the specific object;
when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
and controlling the video player to play the static part at a first play frame rate, and controlling the video player to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
2. The video playback method according to claim 1, further comprising:
judging whether a user selects a video slow playing mode or not; and
and when the user selects the video slow playing mode, the step of judging whether a moving object appears in the current scene or not in the process that the camera shoots the current scene according to the shooting frame rate to record the video is carried out.
3. The video playing method according to claim 1, wherein the first play frame rate is equal to the shooting frame rate.
4. The video playing method according to claim 1, wherein the shooting frame rate comprises a first shooting frame rate and a second shooting frame rate, the first shooting frame rate being less than the second shooting frame rate, the video playing method further comprising:
before the specific object appears in the current scene, controlling the camera to shoot the current scene according to the first shooting frame rate; and
and after the specific object appears in the current scene, controlling the camera to shoot the current scene according to the second shooting frame rate.
5. The video playing method according to claim 1, wherein the step of determining whether a moving object appears in the current scene during the process of shooting the current scene by the camera according to the shooting frame rate to record the video comprises:
acquiring adjacent first frame images and second frame images obtained by shooting the current scene by the camera according to the shooting frame rate;
acquiring a plurality of brightness difference values of the first frame image and the second frame image according to the brightness value of each pixel point in the first frame image and the brightness value of each pixel point in the second frame image;
calculating the number of pixel points with the brightness difference value larger than a preset brightness difference value; and
and when the number is larger than the preset number, determining that the moving object appears in the current scene.
6. The video playing method according to claim 1, wherein the step of determining whether a moving object appears in the current scene during the process of shooting the current scene by the camera according to the shooting frame rate to record the video comprises:
acquiring a multi-frame image obtained by shooting the current scene by the camera to obtain optical flow data; and
and judging whether the moving object appears in the current scene according to the optical flow data.
7. The video playback method according to claim 1, wherein when the number of the dynamic parts is plural, the video playback method further comprises:
comparing the duration of the video of each dynamic part with a preset duration; and
and reclassifying the video of the dynamic part with the duration less than the preset duration as the video of the static part.
8. An electronic device, comprising a camera, a video player, and a processor; the processor is configured to:
judging whether a moving object appears in the current scene or not in the process that the camera shoots the current scene according to the shooting frame rate to record the video;
when a moving object appears in the current scene, judging whether a specific object appears in the current scene;
when the specific object appears in the current scene, recording an initial timestamp and an ending timestamp of the appearance of the specific object;
when the video is played, determining a static part and a dynamic part in the video according to the initial timestamp and the ending timestamp; and
and controlling the video player to play the static part at a first play frame rate, and controlling the video player to play the dynamic part at a second play frame rate, wherein the second play frame rate is less than the first play frame rate.
9. The electronic device of claim 8, wherein the processor is further configured to:
judging whether a user selects a video slow playing mode or not; and
and when the user selects the video slow playing mode, the step of judging whether a moving object appears in the current scene or not in the process that the camera shoots the current scene according to the shooting frame rate to record the video is carried out.
10. The electronic device of claim 8, wherein the first play frame rate is equal to the shooting frame rate.
11. The electronic device of claim 8, wherein the shooting frame rate comprises a first shooting frame rate and a second shooting frame rate, the first shooting frame rate being less than the second shooting frame rate, the processor further configured to:
before the specific object appears in the current scene, controlling the camera to shoot the current scene according to the first shooting frame rate; and
and after the specific object appears in the current scene, controlling the camera to shoot the current scene according to the second shooting frame rate.
12. The electronic device of claim 8, wherein the processor is further configured to:
acquiring adjacent first frame images and second frame images obtained by shooting the current scene by the camera according to the shooting frame rate;
acquiring a plurality of brightness difference values of the first frame image and the second frame image according to the brightness value of each pixel point in the first frame image and the brightness value of each pixel point in the second frame image;
calculating the number of pixel points with the brightness difference value larger than a preset brightness difference value; and
and when the number is larger than the preset number, determining that the moving object appears in the current scene.
13. The electronic device of claim 8, wherein the processor is further configured to:
acquiring a multi-frame image obtained by shooting the current scene by the camera to obtain optical flow data; and
and judging whether the moving object appears in the current scene according to the optical flow data.
14. The electronic device of claim 8, wherein the processor is further configured to:
comparing the duration of the video of each dynamic part with a preset duration; and
and reclassifying the video of the dynamic part with the duration less than the preset duration as the video of the static part.
15. An electronic device, comprising:
a camera;
a video player;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the video playback method of any of claims 1-7.
16. A computer-readable storage medium comprising a computer program for use in conjunction with an electronic device capable of capturing images, the computer program being executable by a processor to perform the video playback method of any of claims 1 to 7.
CN201711461064.1A 2017-12-28 2017-12-28 Video playing method, electronic device and computer readable storage medium Active CN108184165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711461064.1A CN108184165B (en) 2017-12-28 2017-12-28 Video playing method, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711461064.1A CN108184165B (en) 2017-12-28 2017-12-28 Video playing method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108184165A CN108184165A (en) 2018-06-19
CN108184165B true CN108184165B (en) 2020-08-07

Family

ID=62548377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711461064.1A Active CN108184165B (en) 2017-12-28 2017-12-28 Video playing method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108184165B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965705B (en) * 2018-07-19 2020-04-10 北京微播视界科技有限公司 Video processing method and device, terminal equipment and storage medium
CN113766313B (en) * 2019-02-26 2024-03-05 深圳市商汤科技有限公司 Video data processing method and device, electronic equipment and storage medium
CN110769325B (en) * 2019-09-17 2021-12-07 咪咕动漫有限公司 Video shooting method, system, electronic equipment and storage medium
CN112532865B (en) * 2019-09-19 2022-07-19 华为技术有限公司 Slow-motion video shooting method and electronic equipment
CN112312043A (en) * 2020-10-20 2021-02-02 深圳市前海手绘科技文化有限公司 Optimization method and device for deriving animation video
CN112532880B (en) * 2020-11-26 2022-03-11 展讯通信(上海)有限公司 Video processing method and device, terminal equipment and storage medium
CN112653920B (en) * 2020-12-18 2022-05-24 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN114339431B (en) * 2021-12-16 2023-09-01 杭州当虹科技股份有限公司 Time-lapse coding compression method
CN114390236A (en) * 2021-12-17 2022-04-22 云南腾云信息产业有限公司 Video processing method, video processing device, computer equipment and storage medium
CN116996639A (en) * 2023-02-13 2023-11-03 深圳Tcl新技术有限公司 Screen-projection frame rate acquisition method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1234681A (en) * 1998-04-28 1999-11-10 Lg电子株式会社 Moving-image self-adaptive speed-displaying automatic control apparatus and method thereof
CN101529890A (en) * 2006-10-24 2009-09-09 索尼株式会社 Imaging device and reproduction control device
CN101600107A (en) * 2009-07-08 2009-12-09 杭州华三通信技术有限公司 Adjust the method, system and device of video record broadcasting speed
CN102957864A (en) * 2011-08-24 2013-03-06 奥林巴斯映像株式会社 Imaging device and control method thereof
CN104519294A (en) * 2013-09-27 2015-04-15 杭州海康威视数字技术股份有限公司 Mobile information-based video recording playback method and device thereof
CN104811798A (en) * 2015-04-17 2015-07-29 广东欧珀移动通信有限公司 Method and device for regulating video playing speed

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102398835B1 (en) * 2015-09-10 2022-05-17 엘지전자 주식회사 Mobile terminal and method of controlling the same

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1234681A (en) * 1998-04-28 1999-11-10 Lg电子株式会社 Moving-image self-adaptive speed-displaying automatic control apparatus and method thereof
CN101529890A (en) * 2006-10-24 2009-09-09 索尼株式会社 Imaging device and reproduction control device
CN101600107A (en) * 2009-07-08 2009-12-09 杭州华三通信技术有限公司 Adjust the method, system and device of video record broadcasting speed
CN102957864A (en) * 2011-08-24 2013-03-06 奥林巴斯映像株式会社 Imaging device and control method thereof
CN104519294A (en) * 2013-09-27 2015-04-15 杭州海康威视数字技术股份有限公司 Mobile information-based video recording playback method and device thereof
CN104811798A (en) * 2015-04-17 2015-07-29 广东欧珀移动通信有限公司 Method and device for regulating video playing speed

Also Published As

Publication number Publication date
CN108184165A (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN108184165B (en) Video playing method, electronic device and computer readable storage medium
US8774592B2 (en) Media reproduction for audio visual entertainment
US9781379B2 (en) Media recording for audio visual entertainment
US7952596B2 (en) Electronic devices that pan/zoom displayed sub-area within video frames in response to movement therein
US9426409B2 (en) Time-lapse video capture with optimal image stabilization
KR101737857B1 (en) Method and device for controlling playing and electronic equipment
JP5305557B2 (en) Method for viewing audiovisual records at a receiver and receiver for viewing such records
US8957899B2 (en) Image processing apparatus and method for controlling the same
US8620142B2 (en) Video player and video playback method
JP5107806B2 (en) Image processing device
US9324376B2 (en) Time-lapse video capture with temporal points of interest
WO2008069344A1 (en) Output apparatus, output method and program
JP3240871B2 (en) Video summarization method
US11350076B2 (en) Information processing apparatus, information processing method, and storage medium
JP3327520B2 (en) Shooting method with NG warning function, shooting apparatus with NG warning function, and recording medium storing shooting program
US9661217B2 (en) Image capturing apparatus and control method therefor
JP6289591B2 (en) Image processing apparatus, image processing method, and program
JP2009055156A (en) Image reproducing device, imaging device, image reproducing method, and computer program
CN114827477B (en) Method, device, electronic equipment and medium for time-lapse photography
JP7175941B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD FOR IMAGE PROCESSING APPARATUS, AND PROGRAM
JP4506190B2 (en) VIDEO DISPLAY DEVICE, VIDEO DISPLAY METHOD, VIDEO DISPLAY METHOD PROGRAM, AND RECORDING MEDIUM CONTAINING VIDEO DISPLAY METHOD PROGRAM
US20210029306A1 (en) Magnification enhancement of video for visually impaired viewers
JP4934066B2 (en) Information generating apparatus, information generating method, and information generating program
JP2017225037A (en) Image processing apparatus and imaging apparatus
JP6270784B2 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant