WO2021073336A1 - A system and method for creating real-time video - Google Patents


Publication number
WO2021073336A1
Authority
WO
WIPO (PCT)
Prior art keywords
preview frame
camera preview
camera
interest
area
Prior art date
Application number
PCT/CN2020/115365
Other languages
French (fr)
Inventor
Nitin SETIA
Sunil Kumar
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021073336A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Definitions

  • the present invention generally relates to the field of image analysis, and more particularly, to systems and methods for creating real-time video.
  • Videos nowadays are recorded not only for personal use; their importance has also increased considerably in other fields.
  • Videos are used as a mode of communication as well as a mode of expression.
  • Videos are also used as an advertising platform to target the digital market at a very large scale within a very small time interval.
  • In existing solutions, a number of inputs are required from the user’s end.
  • Additional hardware and software are also required to adjust a video to achieve the desired results.
  • The video is either recorded by stopping the recording and then re-starting it after manually changing the angle.
  • Alternatively, the video is recorded by deploying multiple cameras at different angles and thereafter merging and cropping the multiple recorded clips using various software tools.
  • A typical example of such an issue occurs during live streaming of a cricket match, wherein videos at multiple angles and multiple zoom percentages are taken by different cameras, which are later cropped and merged using various software tools and shown to the users.
  • In other solutions, multiple videos are pre-recorded using multiple camera units, keeping in focus multiple factors such as areas of interest, identified objects and imaging parameters, with a dedicated camera unit for each dedicated input. Thereafter, the required video is generated from these multiple dedicated video clips using various software tools.
  • One aspect of the present invention relates to a method for creating a real-time video.
  • The said method comprises receiving a camera preview frame to record a video. Thereafter, the camera preview frame is divided into at least two blocks.
  • The method then comprises performing, by the processing unit [104] , an image analysis on the camera preview frame to identify at least one object and an area of interest, and subsequently, determining, by the processing unit [104] , at least one block of the camera preview frame based on the at least one object and the area of interest.
  • the method includes generating, by the processing unit [104] , a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
  • the system further comprising: a camera unit configured to receive a camera preview frame, wherein said camera preview frame is associated with an aspect ratio.
  • the system further comprises a processing unit, connected to said camera unit, the processing unit configured to: divide the camera preview frame into at least two blocks; perform an image analysis on said camera preview frame to identify at least one object and an area of interest; modify the camera preview frame to determine at least one block of camera preview frame; and encode said block of camera preview frame in said aspect ratio to generate encoded video of said block of camera preview frame.
  • Yet another aspect of the present disclosure encompasses a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out any of the methods for creating real-time video.
  • FIG. 1 illustrates a block diagram of a system for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 2 illustrates a block diagram of camera unit [102] , in accordance with exemplary embodiments of the present disclosure.
  • FIG. 3 illustrates an exemplary method flow diagram [300] , for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 4 illustrates an exemplary method of implementation of present invention [400] , for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 5 illustrates an exemplary user interface diagram [500] , depicting a celebration to create a real-time video of birthday event, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 6 illustrates an exemplary user interface diagram [600] , depicting a conference event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 7 illustrates an exemplary user interface diagram [700] , depicting a gaming event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 8 illustrates an exemplary user interface diagram [800] , depicting a wildlife scenario to create a real-time video of the depicted scene, in accordance with exemplary embodiments of the present disclosure.
  • FIG. 9 illustrates another system for creating real-time video, in accordance with embodiments of the present disclosure.
  • circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
  • well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
  • It is an object of the present invention to provide a system and method for creating real-time video. It is another object of the invention to implement a single-camera solution for capturing frames at multiple different angles. It is also an object of the invention to overcome the need to stop and re-start recording to capture multiple frames. It is yet another object of the invention to avoid the need for zooming to highlight some parts of the camera preview frame. It is also an object of the present invention to provide better object coverage with an intelligent solution, as the human eye cannot focus on other objects while covering one.
  • Another object of the present invention is to provide automatic video making with the desired area of interest, as opposed to the manual video making and merging of the prior art.
  • the present disclosure provides a method and system for creating real-time video.
  • the present invention provides a method and system for creating real-time video or images, of point or area of interest.
  • the invention provides for image analysis on a camera preview frame to identify point or area of interest in the preview frame and/or take user’s inputs for such identification.
  • the invention then enables encoding of the particular video frame comprising the identified point or area of interest to generate a real-time image or video.
  • The image or video generated by the invention has the same aspect ratio as the original preview frame, thus preserving the quality of the video, which is otherwise distorted or degraded by prior-art solutions.
  • The invention also encompasses detecting or identifying the point or area of interest dynamically, so as to capture changing areas of interest as well as multiple areas of interest while recording.
  • The present invention provides a method for creating a real-time video, the method comprising: receiving, at a camera unit [102] , a camera preview frame to record a video; dividing, by a processing unit [104] , the camera preview frame into at least two blocks; performing, by the processing unit [104] , an image analysis on the camera preview frame to identify at least one object and an area of interest; determining, by the processing unit [104] , at least one block of the camera preview frame based on the at least one object and the area of interest; and generating, by the processing unit [104] , a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
  • dividing the camera preview frame into the at least two blocks comprises: determining a minimum recording resolution; and dividing the camera preview frame into the at least two blocks based on the minimum recording resolution.
  • Each of the divided at least two blocks has the same aspect ratio as the camera preview frame.
  • encoding the at least one block of the camera preview frame comprises: encoding the at least one block of the camera preview frame in such a manner that the encoded at least one block of the camera preview frame still has the same aspect ratio as the camera preview frame.
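The block division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the N x N grid choice and the concrete resolutions are assumptions for the example.

```python
def divide_into_blocks(frame_w, frame_h, min_w, min_h):
    """Divide a preview frame into an N x N grid of blocks.

    N is chosen as the largest grid for which each block still meets the
    minimum recording resolution (min_w x min_h); each block keeps the
    frame's aspect ratio because both dimensions are divided by the same N.
    """
    n = max(2, min(frame_w // min_w, frame_h // min_h))
    block_w, block_h = frame_w // n, frame_h // n
    # one (x, y, w, h) rectangle per block, row-major order
    return [(col * block_w, row * block_h, block_w, block_h)
            for row in range(n) for col in range(n)]

# Example: a 4K preview with an assumed 640x360 minimum recording resolution
blocks = divide_into_blocks(3840, 2160, 640, 360)
```

Because every block is the frame's dimensions divided by the same N, the width-to-height ratio of each block is identical to that of the full preview frame, which is what later allows a block to be encoded without distortion.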
  • the method further comprising the following before encoding the at least one block of the camera preview frame: compressing, by the processing unit [104] , a resolution of the determined at least one block of the camera preview frame to the minimum recording resolution.
  • The method further includes the following before dividing the camera preview frame into the at least two blocks: determining and adjusting, by the processing unit [104] , a camera preview resolution of the camera preview frame.
  • the method further includes displaying, at a display unit [106] , the encoded video with the identified at least one object and the area of interest.
  • performing the image analysis on the camera preview frame further to identify the at least one object and the area of interest comprises at least one of: receiving at least one user interaction to identify the at least one object and the area of interest; and automatically identifying the at least one object and the area of interest.
  • receiving at least one user interaction to identify the at least one object comprises:
  • the priority of user interaction is higher than automatic identification.
  • Automatically identifying the area of interest and object comprises: identifying the at least one object and the area of interest dynamically according to a scene corresponding to the camera preview frame or a voice associated with the camera preview frame.
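The priority rule above (user interaction wins over automatic identification) can be sketched as a small selection function. The function name and the (x, y, w, h) rectangle convention are illustrative assumptions, not part of the patent:

```python
def resolve_area_of_interest(user_selection, auto_detection):
    """Resolve the area of interest for the current preview frame.

    User interaction has a higher priority than automatic identification,
    so a manual selection (e.g. a tap-to-focus rectangle) always wins.
    Both arguments are hypothetical (x, y, w, h) rectangles or None.
    """
    if user_selection is not None:   # manual mode, tap, or pinch-zoom
        return user_selection, "user"
    if auto_detection is not None:   # scene- or voice-based detection
        return auto_detection, "auto"
    return None, "none"

# A tap on a face overrides whatever the automatic detector proposed:
roi, source = resolve_area_of_interest((100, 80, 200, 200), (0, 0, 640, 360))
```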
  • the present invention further provides a system for creating real-time video, and the system includes a camera unit [102] and a processing unit [104] .
  • the camera unit [102] is configured to receive a camera preview frame, where the camera preview frame is associated with an aspect ratio.
  • The processing unit [104] is connected to the camera unit [102] , and the processing unit is configured to:
  • divide the camera preview frame into at least two blocks, perform an image analysis on the camera preview frame to identify at least one object and an area of interest, modify the camera preview frame to determine at least one block of the camera preview frame, and encode the at least one block of the camera preview frame in the aspect ratio to generate an encoded video of the at least one block of the camera preview frame.
  • the system further comprises a display unit [106] , connected to the camera unit [102] and processing unit [104] , and configured to display the encoded video with the at least one object and the area of interest.
  • the processing unit [104] is further configured to determine and adjust, a preview resolution of the camera preview frame.
  • the processing unit [104] is further configured to compress the resolution of the determined at least one block of camera preview frame to a minimum recording resolution.
  • The present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out any of the methods for creating a real-time video.
  • the “camera preview frame” comprises at least one real-time preview of an event picked up by the camera sensor unit. Further, the said real-time preview of an event comprises the preview of at least one real-time imaging parameter.
  • camera preview frame may refer to the preview generated by a camera and can be seen on the display of a user device when the user opens a camera application.
  • the “imaging parameters” comprises one or more parameters of a scene, an exposure, a face area, an ISO value etc.
  • image/media analysis refers to determination of one or more imaging parameters and/or identification of at least one face, area of interest, point of interest, object of interest, etc.
  • The “aspect ratio” refers to the ratio of the width to the height of the camera preview frame/image/video.
  • a “processing unit” or “processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions.
  • a processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
  • the processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
  • a “display unit” or “display” includes one or more computing device for displaying camera preview frame, images or videos generated by user/electronic devices.
  • the said display unit may be an additional hardware coupled to the said electronic device or may be integrated with in the electronic device.
  • the display unit may further include but not limited to CRT display, LED display, ELD display, PDP display, LCD display, OLED display and the like.
  • A user device may be any electrical, electronic, electromechanical or computing device or equipment.
  • the user device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device in which a camera can be implemented.
  • the user device contains at least one input means configured to receive an input from the user, a processor and a display unit configured to display at least the camera preview frame, media, etc. to the user.
  • The system [100] comprises at least one camera unit [102] , at least one processing unit [104] , and at least one display unit [106] . All of these components/units are connected to each other; however, these connections have not been shown in Fig. 1 for the sake of clarity.
  • the said system [100] is configured to provide at least one real-time video with the help of the said interconnection between the said camera unit [102] , the said processing unit [104] and the said display unit [106] .
  • the camera unit [102] is configured to receive at least one camera preview frame, said camera preview frame being associated with an aspect ratio. Further, the camera preview frame comprises real-time data with respect to the current events occurring in the surrounding environment. A video may then be created from this real-time data of the camera preview frame.
  • The components and functions of the camera unit [102] are further discussed with reference to Fig. 2.
  • The processing unit [104] is configured to divide the preview resolution of the camera preview frame into at least two blocks and to perform an image analysis on the camera preview frame in order to identify at least one object and an area of interest, wherein said image analysis includes identification of at least one of a face in real-time, an area of interest, an object, real-time imaging parameters associated with the said camera preview frame and the like parameters.
  • The processing unit [104] is also configured to modify the camera preview frame to determine at least one block of the camera preview frame comprising at least one of a point of interest, an identified object and the like imaging parameters.
  • The processing unit [104] is also configured to encode the said block of the camera preview frame in real-time in accordance with the said point of interest and the said aspect ratio, wherein the said point of interest can be changed during recording of the video, and multiple points of interest can be encoded, in accordance with the invention, to select points of interest dynamically.
  • the said processing unit [104] is configured to encode, at least one real-time video from the camera preview frame, wherein the said encoding is in accordance with at least one point of interest.
  • the calculation of area/point of interest may be based at least on existing algorithms for machine learning, scene detection, voice direction, detection of area of interest and the like parameters.
  • The scene detection may further comprise detection of at least one event such as a party, a birthday, a wedding, a concert, a stage, wildlife, a beach and the like.
  • the area of interest may include for example, birthday party cake cutting, wedding dress and wedding couple, dice and mike detection and decision of location, stage and performer detection (both music and dance) and the like.
  • The said encoding further comprises adjusting the preview resolution of the camera preview frame to the highest possible resolution with respect to the camera unit [102] . Further, the camera preview frame at the adjusted preview resolution is divided into blocks to perform the image analysis on the preview frame, wherein the said image analysis is done to find at least one object and area of interest, and the said division into blocks is such that each small block has the same aspect ratio as the full preview frame/camera preview frame.
  • the display unit [106] is configured to display a cropped frame from camera preview frame/overall field of view. Further, the video is encoded on the basis of the buffer of said blocks as one frame, wherein the said blocks are best suited blocks and are compressed to a minimum recording resolution prior to said video encoding.
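The compression of a best-suited block to the minimum recording resolution before encoding can be illustrated with simple arithmetic. `compress_to_min_resolution` is a hypothetical helper for this sketch; in practice the scaling would be performed by the camera pipeline or an image library.

```python
def compress_to_min_resolution(crop_w, crop_h, min_w, min_h):
    """Scale a cropped block down to the minimum recording resolution.

    Both dimensions shrink by the same factor, so the width-to-height
    ratio of the block is unchanged; a block already at or below the
    minimum is left as-is.
    """
    scale = max(min_w / crop_w, min_h / crop_h)
    if scale >= 1:
        return crop_w, crop_h
    return round(crop_w * scale), round(crop_h * scale)

# A 2x2 group of 480x270 blocks (960x540) compressed before encoding,
# assuming a 640x360 minimum recording resolution:
out_w, out_h = compress_to_min_resolution(960, 540, 640, 360)
```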
  • the display unit [106] is configured to display the camera preview frame, image generated by the processing unit [104] and video encoded by the processing unit [104] along with the area of interest and identified objects.
  • the said display unit [106] may be an additional hardware coupled to the said electronic/user device or may be integrated with in the user/electronic device.
  • the display unit [106] may further include but not limited to CRT display, LED display, ELD display, PDP display, LCD display, OLED display and the like.
  • the system [100] as shown in Fig. 1 may reside in the electronic device/user device.
  • the invention also encompasses that the processing unit [104] of the system [100] resides at a remote server, while the camera unit [102] and the display unit [106] resides in the user device, such that the camera preview frame is captured by the camera unit [102] at the user device and sent to the processing unit [104] at the remote server for processing.
  • FIG. 2 illustrates a block diagram of camera unit [102] , in accordance with exemplary embodiments of the present invention.
  • The said camera unit [102] comprises at least one camera sensor unit [202] , at least one camera driver [204] , at least one camera HAL [206] , at least one camera framework [208] and at least one camera preview frame unit [210] .
  • The camera sensor unit [202] is configured to pick up the events in the surroundings of the camera unit [102] as raw real-time data.
  • the camera driver [204] is configured to collect the raw real-time data from the said camera sensor unit [202] and provide the same to the camera HAL [206] .
  • the camera HAL [206] is configured to process the said collected real-time data and provide the same to the camera preview frame unit [210] .
  • The camera preview frame unit [210] is configured to provide a graphical user interface presenting a preview of the camera preview frame to the user.
  • the invention encompasses that the camera preview frame unit [210] is configured to display the camera preview frame on the display unit [106] of the electronic/user device.
  • The camera preview frame unit [210] is further configured to display at least one real-time encoded video in accordance with at least one of the real-time video point-of-interest frame, an identified object and the like imaging parameters associated with the camera preview frame.
  • the camera sensor unit [202] also comprises at least one light sensitive processing unit configured to measure and process the imaging parameters of the camera preview frame.
  • The camera framework [208] is configured to provide a module to interact with the said camera sensor unit [202] , said camera driver [204] , said camera HAL [206] and the said camera preview frame unit [210] .
  • The said camera framework [208] is further configured to store files for input data, processing and the guiding mechanism.
  • FIG. 3 illustrates an exemplary method flow diagram [300] depicting method for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
  • the invention encompasses that the method begins at step [302] .
  • The method begins when the user selects the automatic/intelligent recording mode, wherein the user may select the automatic/intelligent recording mode to create real-time video via an input means.
  • the user may select automatic/intelligent recording mode by selecting an icon on the camera preview frame.
  • Such selection of automatic/intelligent recording mode may occur during the recording of a video or when the user is about to start recording.
  • An indication of the selected automatic/intelligent recording mode is displayed to the user on the camera preview frame. For instance, an icon ‘auto-record’ may be shown over the camera preview frame, or an icon ‘manual-record’ may be shown over the camera preview frame.
  • the method at step [304] comprises, receiving, at least one camera preview frame to record a video.
  • the said camera preview frame is associated with an aspect ratio.
  • the camera preview frame may be received at the processing unit [104] from the camera unit [102] .
  • the camera preview frame provides the at least one real-time preview of an event picked up by the camera sensor unit [202] .
  • camera preview frame may refer to the preview generated by a camera and can be seen on the display unit [106] of a user device when the user opens a camera application, wherein in an instance the said generated preview of said camera preview frame is adjusted at maximum possible resolution.
  • The method then leads to step [306] , wherein the preview resolution of the said camera preview frame is divided into at least two blocks.
  • The said division of the preview resolution into blocks is achieved such that the aspect ratio of each small block is the same as the aspect ratio of the camera preview frame/overall field of view.
  • the camera preview frame may be divided into 4 or 16 blocks. Each block represents a small view that is required to be analysed by the processing unit [104] .
  • the method at step [308] comprises, performing an image analysis on said camera preview frame to identify at least one object and an area of interest.
  • the invention encompasses that the image analysis on said camera preview frame comprises receiving at least one user interaction to identify the object and area of interest.
  • The invention also encompasses automatic identification of the area of interest and object.
  • The automatic identification includes identifying at least one face from the said camera preview frame, an object, a scene, a voice direction, and the like parameters, in order to identify the object and area of interest.
  • If there is a user intervention for identifying the area of interest and object, the said user intervention is taken into consideration on priority over the said automatic identification.
  • the user intervention of identifying the area of interest and object may be in the form of user selecting a manual mode of recording.
  • The user intervention is detected when, although the recording is taking place in auto mode, the user attempts to zoom the video or set the focus to a particular area or object.
  • the invention encompasses switching between the auto and the manual mode while recording the video.
  • The method thereafter leads to step [310] , and at step [310] the method comprises modifying the camera preview frame to determine at least one block of the camera preview frame with at least one of an identified object and an area of interest.
  • the said modification is achieved by cropping the camera preview frame/overall field of view in real-time, wherein the said cropping is achieved with respect to the identified, area of interest, object and/or the other associated parameters of the camera preview frame.
  • The display of the display unit [106] is changed to the cropped frame from the overall field of view, which further comprises the identified area of interest and/or object. Therefore, the user is able to see only those frames on the camera preview frame that include the area of interest/object, i.e. excluding other views or frames in the original camera preview frame.
  • the method at step [312] comprises encoding, said block of camera preview frame in said aspect ratio, to generate encoded video of said block of camera preview frame, wherein the said block of camera preview frame comprises the best suited block compressed to the minimum recording resolution prior to said encoding.
  • the best suited block of camera preview frame is the modified/cropped camera preview frame comprising at least one of a manually/automatically identified, area of interest, object and the other related parameter of camera preview frame.
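A sketch of how best-suited blocks might be selected from the grid: blocks overlapping the identified area of interest are kept, and their union forms the cropped frame. The function name and the (x, y, w, h) rectangle convention are assumptions for illustration, not the patent's implementation:

```python
def best_suited_blocks(blocks, roi):
    """Select the grid blocks overlapping the area of interest and the
    bounding rectangle uniting them (the cropped frame to display).

    `blocks` and `roi` are (x, y, w, h) rectangles.
    """
    def overlaps(b):
        bx, by, bw, bh = b
        rx, ry, rw, rh = roi
        return bx < rx + rw and rx < bx + bw and by < ry + rh and ry < by + bh

    chosen = [b for b in blocks if overlaps(b)]
    if not chosen:
        return [], None
    x0 = min(b[0] for b in chosen)
    y0 = min(b[1] for b in chosen)
    x1 = max(b[0] + b[2] for b in chosen)
    y1 = max(b[1] + b[3] for b in chosen)
    return chosen, (x0, y0, x1 - x0, y1 - y0)

# 4x4 grid over a 1920x1080 preview; the area of interest is a region
# (e.g. a birthday cake) near the centre-right of the frame
grid = [(c * 480, r * 270, 480, 270) for r in range(4) for c in range(4)]
chosen, crop = best_suited_blocks(grid, (900, 500, 300, 300))
```

In this example four blocks are kept and their union is a 960x540 crop, which has the same 16:9 aspect ratio as the full 1920x1080 field of view.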
  • Thereafter, upon encoding a real-time video based at least on the manually/automatically identified area of interest, object and the other related parameters of the camera preview frame, the method terminates at step [314] .
  • the invention also encompasses storing the encoded real-time video at the user device.
  • the invention also encompasses displaying the encoded video at the display unit [106] .
  • FIG. 4 refers to an exemplary method of implementation of present invention [400] , for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
  • the method at step [402] comprises receiving a camera preview frame, wherein the said camera preview frame is associated with a real-time preview of an event picked up by the camera sensor unit [202] .
  • the said real-time preview of an event comprises the preview of at least one real-time imaging parameter, area of interest, object and the associated parameters of camera preview frame.
  • camera preview frame may refer to the preview generated by a camera and can be seen on the display unit [106] of a user device when the user opens a camera application.
  • the method at step [404] comprises starting video recording and the method at step [406] further comprises getting/receiving a preview resolution related to said camera preview frame and further adjusting/setting the said preview resolution to the maximum possible resolution.
  • The method further comprises dividing the said preview resolution into NxN blocks, wherein N may be 1, 2, 3, 4 and so on. The division of the said preview resolution is such that each block has the same aspect ratio as that of the camera preview frame/overall field of view.
  • the method further at step [410] comprises, image analysis on said preview, wherein image analysis includes identification of at least one of, face in real-time, area of interest, object, real-time imaging parameters associated with the said camera preview frame and the like associated parameters.
  • The method at step [412] comprises user interaction to manually get the objects, area of interest and other related parameters. In an instance, while recording any celebration such as a birthday party, the user may manually focus on the main specific object, area of interest and other related parameters, without zooming manually and operating the camera himself, to record an object-focused video when the object is in the frame.
  • the method comprises automatic image analysis and processing to get objects, area of interest and other related parameters of camera preview frame.
  • the calculation of area/point of interest may be based at least on existing algorithms for machine learning, scene detection, voice direction, detection of area of interest and the like parameters.
  • The scene detection may further comprise detection of at least one event such as a party, a birthday, a wedding, a concert, a stage, wildlife, a beach and the like.
  • the area of interest may be for example, birthday party cake cutting, wedding dress and wedding couple, dice and mike detection and decision of location, stage and performer detection (both music and dance) and the like.
  • If there is a user intervention for identifying the area of interest and object, the said user intervention is taken into consideration on priority over the said automatic identification.
  • the method at step [416] comprises, changing display and encoding resolution calculated from area of interest, objects and other parameters in same aspect ratio as that of the overall field of view/camera preview frame.
  • the said change in display comprises modifying/cropping the overall field of view with respect to at least one of the area of interest, object and other imaging parameters related to said camera preview frame.
  • the resolution of the camera preview frame is adjusted to the maximum and the resolution of the small divided blocks is compressed to the minimum before encoding the video based on said blocks.
  • the aspect ratio of each small divided block is similar to the aspect ratio of overall field of view/camera preview frame.
  • The method at step [418] comprises receiving/getting the final encoded video with the area of interest, objects and other parameters.
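The overall flow of FIG. 4 (steps [406] to [418]) can be condensed into one illustrative routine. Every name and numeric default here is an assumption, and the 'encoder' is stubbed as a list of per-frame crops and output resolutions:

```python
def record_block_video(frames, min_w=640, min_h=360):
    """Condensed sketch of the FIG. 4 flow for a stream of preview frames.

    `frames` yields (width, height, roi) tuples, where roi is the
    (x, y, w, h) area of interest produced by image analysis or user
    interaction; the returned list stands in for the encoder, holding
    one (crop_rectangle, output_resolution) entry per frame.
    """
    encoded = []
    for frame_w, frame_h, roi in frames:
        # steps [406]-[408]: divide the preview into an N x N grid whose
        # blocks keep the frame's aspect ratio
        n = max(2, min(frame_w // min_w, frame_h // min_h))
        bw, bh = frame_w // n, frame_h // n
        # steps [410]-[414]: keep the block rows/columns overlapping the
        # area of interest and unite them into one crop
        cols = range(roi[0] // bw, min(n - 1, (roi[0] + roi[2] - 1) // bw) + 1)
        rows = range(roi[1] // bh, min(n - 1, (roi[1] + roi[3] - 1) // bh) + 1)
        x0, y0 = cols[0] * bw, rows[0] * bh
        crop_w, crop_h = len(cols) * bw, len(rows) * bh
        # step [416]: compress the crop toward the minimum recording resolution
        scale = min(1.0, max(min_w / crop_w, min_h / crop_h))
        encoded.append(((x0, y0, crop_w, crop_h),
                        (round(crop_w * scale), round(crop_h * scale))))
    # step [418]: the final encoded video, one entry per preview frame
    return encoded

result = record_block_video([(1920, 1080, (600, 300, 500, 400))])
```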
  • an exemplary user interface diagram [500] depicting a celebration to create a real-time video of birthday event, in accordance with exemplary embodiments of the present disclosure is shown.
  • the video of the given event of birthday celebration is generated by implementing the present invention.
  • the given exemplary user interface diagram [500] comprises a camera preview frame/overall field of view [502] of the said birthday event, point of interest [504] and encoded block of said camera preview frame [506] .
  • in an instance, there can be multiple points of interest [504] in a given event.
  • the user interface diagram indicates that, the camera preview frame [502] is being accessed in the automatic/intelligent recording mode of the present invention. Thereafter, the image analysis on the said camera preview frame [502] , is done by first dividing the camera preview frame [502] into small blocks, wherein the said division is achieved considering the point of interest [504] in focus.
  • the best suited blocks comprising the point of interest/object [504], i.e. the cake, are identified in accordance with the invention and such best-suited blocks are further used as a single frame to encode the real-time video.
  • the camera preview frame [502] is divided into 16 small blocks and thereafter the 6 best suited small blocks comprising the point of interest/object of interest [504] are further considered to encode the real-time video.
  • the aspect ratio of said small blocks is the same as the aspect ratio of the full preview of the camera preview frame [502], thus preserving the video quality.
  • the said best suited small blocks are compressed to minimum recording resolution prior to encoding the video in accordance with the single frame of said best suited blocks.
  • the generated final real-time video comprising, encoded block of said camera preview frame focused on the object of interest is shown by [506] .
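The selection of the six best-suited blocks in the birthday example can be approximated by a simple rectangle-intersection test. The function name `blocks_covering_roi` and the sample coordinates below are illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch: pick the grid blocks that intersect the region of
# interest (e.g. the cake in FIG. 5). Function and coordinates are assumed.
def blocks_covering_roi(blocks, roi):
    """Return the blocks (x, y, w, h) that overlap the roi rectangle."""
    rx, ry, rw, rh = roi
    return [(bx, by, bw, bh) for (bx, by, bw, bh) in blocks
            if bx < rx + rw and rx < bx + bw
            and by < ry + rh and ry < by + bh]

# a 4 x 4 grid over a 1920x1080 preview (blocks of 480x270 each)
grid = [(c * 480, r * 270, 480, 270) for r in range(4) for c in range(4)]

# a region of interest spanning 2 columns and 3 rows selects 6 blocks,
# matching the six best-suited blocks of the birthday example
selected = blocks_covering_roi(grid, (700, 100, 520, 680))
```

The selected blocks would then be composed into the single frame that is handed to the encoder, as described above.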
  • Referring to FIG. 6, an exemplary user interface diagram [600], depicting a conference event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure, is shown.
  • the given exemplary user interface diagram [600] indicates the video generation of a conference event by implementing the present invention.
  • the said user interface further comprises a camera preview frame/overall field of view [602] of the said conference event, point of interest [604] and encoded block of said camera preview frame [606] .
  • the camera preview frame/overall field of view [602] is divided into 16 small blocks, in accordance with the invention.
  • the said division of camera preview frame [602] is achieved to further assist the image analysis on said camera preview frame [602] , wherein the image analysis is being done as per the point/area of interest [604] .
  • the area of interest in this instance, i.e. the person delivering the speech, is identified in accordance with the invention.
  • although a single point of interest [604] is shown in the given example, there can be multiple points of interest [604], and other relevant parameters like specific objects or imaging parameters can be taken into consideration.
  • the best suited blocks comprising point of interest [604] are further used as a single frame to encode the real-time video using said single frame.
  • the aspect ratio of the single frame comprising the best-suited blocks is the same as that of the preview of the camera preview frame [602].
  • the said camera preview frame [602] is further modified/cropped with respect to the single frame of best suited blocks, comprising point of interest [604] , such that while recording the video the user is able to see the cropped video frame as shown in [606] .
  • the two small blocks as shown in [606] comprising point of interest [604] are then used to encode the real-time video with respect to point of interest frames. Therefore, the given exemplary user interface diagram [600] , indicates a generated final real-time video comprising, encoded block of said camera preview frame [606] .
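Cropping the preview to the single frame of best-suited blocks while keeping the original aspect ratio, as in the conference example, might look like the following sketch. The function name and the expand-then-clamp strategy are assumptions for illustration.

```python
# Illustrative sketch: crop the preview to the bounding box of the selected
# blocks, expanded to the frame's aspect ratio so the output is undistorted.
def crop_to_blocks(selected, frame_w, frame_h):
    """Return (x, y, w, h) of a crop covering the selected blocks."""
    x0 = min(b[0] for b in selected)
    y0 = min(b[1] for b in selected)
    x1 = max(b[0] + b[2] for b in selected)
    y1 = max(b[1] + b[3] for b in selected)
    w, h = x1 - x0, y1 - y0
    if w * frame_h < h * frame_w:   # box too narrow: widen it
        w = h * frame_w // frame_h
    else:                           # box too short: heighten it
        h = w * frame_h // frame_w
    # keep the crop inside the frame
    x0 = max(0, min(x0, frame_w - w))
    y0 = max(0, min(y0, frame_h - h))
    return x0, y0, w, h

# two side-by-side 480x270 blocks, as in the conference example [606]
crop = crop_to_blocks([(480, 270, 480, 270), (960, 270, 480, 270)], 1920, 1080)
```

The union of the two blocks is 960x270 (32:9), so the crop is heightened to 960x540, restoring the 16:9 ratio of the full preview before encoding.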
  • FIG. 7 illustrates an exemplary user interface diagram [700] , depicting a gaming event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure.
  • the video of the given gaming event is generated by implementing the present invention.
  • the given exemplary user interface diagram [700] comprises a camera preview frame/overall field of view [702] of the said gaming event, point of interest [704] and encoded block of said camera preview frame [706] .
  • the camera preview frame [702] indicates the overall field of view of a gaming event with a specific point of interest [704].
  • the camera preview frame is divided into 16 small blocks to further perform the image analysis on said blocks.
  • the division of the camera preview frame [702] may be achieved by dividing the said camera preview frame [702] into various combinations of different small blocks, wherein the order of said division is NxN.
  • the value of N varies as 1, 2, 3, 4 and so on, considering the possible number of small portions/blocks.
  • the division is done in a manner such that the aspect ratio of each said small block is the same as the aspect ratio of the preview of said camera preview frame [702].
  • the point of interest [704] is a player catching a ball and in order to record a video comprising the said player/point of interest [704] in focus, the said video is being recorded in automatic/intelligent recording mode in accordance with the present invention.
  • the given user interface at [706] indicates that the said camera preview frame [702], considering in focus the point of interest [704], is further divided into four small blocks having the same aspect ratio as that of the preview of the camera preview frame [702], wherein the said four small blocks are the suitable small blocks comprising the point of interest [704] to encode the real-time video with respect to point of interest frames.
  • These four small divided blocks comprising the point of interest [704] are then considered as a single frame to encode/generate the real-time video.
  • the said camera preview frame [702] is further modified/cropped with respect to the single frame of best suited blocks, comprising point of interest [704] .
  • the given user interface indicates an encoded block of said camera preview frame [706] , wherein the said encoded block of said camera preview frame [706] comprises the said point of interest [704] i.e. player catching a ball in the given user interface. Therefore, in the given user interface, the real-time video is generated by considering point of interest/player catching a ball [704] in focus and the said generated video is in same aspect ratio as that of the overall field of view [702] .
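The NxN division orders mentioned above (N = 1, 2, 3, 4 and so on) can be enumerated as below. The extra constraint that N divide both dimensions exactly, giving whole-pixel blocks, is an added assumption of this sketch, not a requirement stated in the disclosure.

```python
# Illustrative sketch: enumerate N x N grid orders for a given preview size.
# Dividing both dimensions by the same N always preserves the aspect ratio;
# here we additionally require whole-pixel blocks (N divides both exactly).
def valid_grid_orders(frame_w, frame_h, max_n=8):
    """Return the values of N (up to max_n) giving whole-pixel N x N blocks."""
    return [n for n in range(1, max_n + 1)
            if frame_w % n == 0 and frame_h % n == 0]

orders = valid_grid_orders(1920, 1080)
```

For a 1920x1080 preview this yields N in {1, 2, 3, 4, 5, 6, 8}; N = 4 gives the 16-block grids used in the examples, and N = 2 corresponds to the four-block division at [706].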
  • Referring to FIG. 8, an exemplary user interface diagram [800] depicting a wildlife scenario to create a real-time video of the depicted scene, in accordance with exemplary embodiments of the present disclosure, is shown.
  • the video of a wildlife scenario is generated by implementing the present invention.
  • the given exemplary user interface diagram [800] comprises a camera preview frame/overall field of view [802] of the said wildlife scenario, point of interest [804] and encoded block of said camera preview frame [806] .
  • the camera preview frame [802] indicates the overall field of view of a wildlife scene with a specific point of interest [804].
  • the point of interest [804] is a lion hunting a deer and in order to record a video comprising the said lion/point of interest [804] in focus, the said video is being recorded in automatic/intelligent recording mode in accordance with the present invention.
  • the point of interest [804] is difficult to focus while recording a real-time video, therefore in order to record the said real-time video comprising point of interest [804] in focus, said video may be captured using the present invention.
  • the camera preview frame [802] comprising point of interest [804] is being divided into 16 small blocks. Thereafter, image analysis is performed on the said small blocks with respect to the point of interest [804] .
  • the said small blocks are compressed to a minimum recording resolution and further used as a single frame to encode a real time video, wherein the said single frame comprises small blocks having area of interest in focus (i.e. suitable small blocks) and aspect ratio same as the aspect ratio of preview of camera preview frame [802] .
  • the encoded block of said camera preview frame [806] indicates a frame of small encoded blocks, wherein the said frame further comprises four small suitable blocks.
  • the best suited blocks collectively as a single frame, comprising point of interest [804] , are used to encode/generate the real-time video comprising said single frame. Therefore, in the given user interface, the real-time video is generated by considering point of interest [804] in focus and the said generated video is in same aspect ratio as that of the overall field of view/camera preview frame [802] .
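The compression of the suitable blocks to a minimum recording resolution before encoding could be sketched with a nearest-neighbour resize. A real device would use the camera pipeline or a hardware scaler; `downscale` below is purely illustrative.

```python
# Illustrative sketch: nearest-neighbour resize of a 2-D grid of pixel
# values, standing in for compressing a block to minimum recording
# resolution before encoding. Purely an assumption for demonstration.
def downscale(pixels, out_w, out_h):
    """Resize a list-of-rows pixel grid to out_w x out_h."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [[pixels[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

# a tiny 4x4 'frame' whose pixels record their own (row, col) coordinates
frame = [[(r, c) for c in range(4)] for r in range(4)]
small = downscale(frame, 2, 2)
```

Since the output width and height are scaled by the same factor, the downscaled block keeps the aspect ratio of the source, consistent with the embodiments above.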
  • FIG. 9 is a block diagram illustrating another system for creating real-time video according to embodiments.
  • the system includes a housing (not illustrated), a memory 901, a central processing unit (CPU) 902 (also referred to as a processor; hereinafter CPU for short), a circuit board (not illustrated), and a power supply circuit (not illustrated).
  • the circuit board is disposed inside a space defined by the housing.
  • the CPU 902 and the memory 901 are disposed on the circuit board.
  • the power supply circuit is configured to supply power to each circuit or component of the system.
  • the memory 901 is configured to store executable program codes.
  • the CPU 902 is configured to run a computer program corresponding to the executable program codes by reading the executable program codes stored in the memory 901 to carry out: receiving a camera preview frame to record a video; dividing the camera preview frame into at least two blocks; performing an image analysis on the camera preview frame to identify at least one object and an area of interest; determining at least one block of the camera preview frame based on the at least one object and the area of interest; generating a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
  • the system further includes a peripheral interface 903, a radio frequency (RF) circuit 905, an audio circuit 906, a speaker 911, a power management chip 908, an input/output (I/O) subsystem 909, other input/control devices 910, a touch screen 912, and an external port 904, which communicate with each other via one or more communication buses or signal lines 907.
  • the system 900 illustrated is just exemplary and can have more or fewer components than those illustrated in FIG. 9.
  • two or more components can be combined, or different component configurations can be adopted in the system.
  • the various components illustrated in FIG. 9 can be implemented in hardware, software, or a combination thereof including one or more signal processing and/or application specific integrated circuits.
  • the following describes a mobile phone as an example of the system for creating real-time video.
  • the memory 901 is accessible to the CPU 902, the peripheral interface 903, and so on.
  • the memory 901 can include a high-speed random access memory and can further include a non-transitory memory such as one or more magnetic disk storage devices, flash memory devices, or other non-transitory solid-state memory devices.
  • the peripheral interface 903 is configured to connect the input and output peripherals of the device to the CPU 902 and the memory 901.
  • the I/O subsystem 909 is configured to connect the input and the output peripherals such as the touch screen 912 and other input/control devices 910 to the peripheral interface 903.
  • the I/O subsystem 909 can include a display controller 9091 and one or more input controllers 9092 configured to control other input/control devices 910.
  • the one or more input controllers 9092 are configured to receive electrical signals from or send electrical signals to other input/control devices 910, where other input/control devices 910 can include a physical button (a press button, a rocker button, etc.), a dial, a slide switch, a joystick, or a click wheel.
  • the input controller 9092 can be coupled with any of a keyboard, an infrared port, a universal serial bus (USB) interface, and a pointing apparatus such as a mouse.
  • the touch screen 912 functions as an input interface and an output interface between a system and a user, and is configured to display a visual output to the user.
  • the visual output can include graphics, text, icons, videos, and the like.
  • the display controller 9091 in the I/O subsystem 909 is configured to receive an electrical signal from or send an electrical signal to the touch screen 912.
  • the touch screen 912 is configured to detect contact on the touch screen.
  • the display controller 9091 is configured to convert the contact detected into an interaction with a user interface object displayed on the touch screen 912, that is, to realize human-computer interaction.
  • the user interface object displayed on the touch screen 912 can be an icon of a running game, an icon indicating connection to corresponding networks, and the like.
  • the device can also include a light mouse, which is a touch-sensitive surface that does not display a visual output, or can be an extension of a touch-sensitive surface formed by the touch screen.
  • the RF circuit 905 is configured to establish communication between a mobile phone and a wireless network (i.e. network side), to transmit and receive data between the mobile phone and the wireless network, such as transmitting and receiving short messages, emails, and the like.
  • the RF circuit 905 is configured to receive and transmit RF signals (also known as electromagnetic signals) , to convert an electrical signal into an electromagnetic signal or convert an electromagnetic signal into an electrical signal, and to communicate with a communication network and other devices through electromagnetic signals.
  • the RF circuit 905 can include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) , and so on.
  • the audio circuit 906 is configured to receive audio data from the peripheral interface 903, to convert the audio data into an electrical signal, and to transmit the electrical signal to the speaker 911.
  • the speaker 911 is configured to restore the voice signal received by the mobile phone from the wireless network via the RF circuit 905 to sound and to play the sound to the user.
  • the power management chip 908 is configured for power supply and power management of the hardware connected to the CPU 902, the I/O subsystem 909, and the peripheral interfaces 903.
  • the system for creating real-time video and the storage medium can execute the method for creating real-time video of any of the above embodiments and have corresponding functional modules and advantages of executing the method.
  • the units, interfaces, modules, and/or components depicted in the figures and described herein may be present in the form of hardware, software, or a combination thereof. Connections shown between these units/components/modules/interfaces in the exemplary system architecture may interact with each other through various wired links, wireless links, logical links and/or physical links. Further, the units/components/modules/interfaces may be connected in other possible ways.


Abstract

Embodiments of the present disclosure may relate to methods and systems of creating real-time video. The invention encompasses receiving at least one camera preview frame to record a video, said camera preview frame being associated with an aspect ratio. Thereafter the invention further comprises dividing the preview resolution of said camera preview frame into at least two blocks and performing an image analysis on said camera preview frame to identify at least one object and an area of interest. Further, the present invention encompasses modifying the camera preview frame to determine at least one block of the camera preview frame, said modification being based on the area of interest and the identified object, and thereafter encoding said block of the camera preview frame in said aspect ratio, to generate an encoded video of said block of the camera preview frame.

Description

A SYSTEM AND METHOD FOR CREATING REAL-TIME VIDEO
FIELD OF INVENTION
The present invention generally relates to the field of image analysis, and more particularly, to systems and methods for creating real-time video.
BACKGROUND
This section is intended to provide information relating to the field of the invention and thus any approach or functionality described below should not be assumed to be qualified as prior art merely by its inclusion in this section.
Over the past few years the trend of recording videos has increased to a great extent. Videos nowadays are not only recorded for personal use but their importance has also considerably increased in other fields. For instance, videos are being used as a mode of communication as well as a mode of expression. In another instance, videos are also used as an advertising platform to target the digital market at a very large scale in a very small time interval. In order to record a video according to a particular situation/scenario, a number of inputs are required from the user's end. In some instances, additional hardware and software are required to adjust a video as per the desired results.
There are some instances where it is not possible to capture a video at multiple angles using a single device. In such cases the video is recorded by stopping the recording and then re-starting it after changing the angle manually. Alternatively, the video is recorded by implementing multiple cameras at different angles and thereafter merging and cropping the multiple recorded clips using various software. A typical example of such an issue occurs during live streaming of a cricket match, wherein videos from multiple angles and multiple zoom percentages are taken by different cameras, which are later cropped and merged using various software and shown to the users.
Furthermore, in order to capture a video by considering a particular object, multiple objects, an area of interest, imaging parameters or the like, in focus, there are fewer chances to get the desired video output, as the human eye cannot focus on other objects while covering a single object. As per the prior art solutions, a user is required to zoom to the particular object to be focused on while recording video. This is, however, not an ideal solution, since the zoomed video is captured in a poor aspect ratio, thereby affecting the quality of the recorded video.
In another prior art solution, multiple videos are pre-recorded using multiple camera units, considering in focus multiple factors like area of interests, identified objects, imaging parameters etc. using a dedicated camera unit for each dedicated input. Thereafter, the required video is generated using these multiple dedicated video clips using various software.
There are no current solutions that provide real-time video recording in accordance with at least one of the areas of interest, identified objects, imaging parameters and the like real-time data inputs. Therefore, there is a need to alleviate the problems existing in the prior art and develop a more efficient solution for providing a real-time video.
SUMMARY
This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
One aspect of the present invention relates to a method for creating a real-time video. The said method comprises: receiving a camera preview frame to record a video. Thereafter, the camera preview frame is divided into at least two blocks. The method then comprises performing, by the processing unit [104], an image analysis on the camera preview frame to identify at least one object and an area of interest, and subsequently, determining, by the processing unit [104], at least one block of the camera preview frame based on the at least one object and the area of interest. Lastly, the method includes generating, by the processing unit [104], a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
Another aspect of the present disclosure encompasses a system for creating real-time video. The said system comprises: a camera unit configured to receive a camera preview frame, wherein said camera preview frame is associated with an aspect ratio. The system further comprises a processing unit, connected to said camera unit, the processing unit configured to: divide the camera preview frame into at least two blocks; perform an image analysis on said camera preview frame to identify at least one object and an area of interest; modify the camera preview frame to determine at least one block of the camera preview frame; and encode said block of the camera preview frame in said aspect ratio to generate an encoded video of said block of the camera preview frame.
Yet another aspect of the present disclosure encompasses a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out any of the methods for creating real-time video.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
FIG. 1 illustrates a block diagram of a system for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
FIG. 2 illustrates a block diagram of camera unit [102] , in accordance with exemplary embodiments of the present disclosure.
FIG. 3 illustrates an exemplary method flow diagram [300] , for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
FIG. 4 illustrates an exemplary method of implementation of present invention [400] , for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
FIG. 5 illustrates an exemplary user interface diagram [500] , depicting a celebration to create a real-time video of birthday event, in accordance with exemplary embodiments of the present disclosure.
FIG. 6 illustrates an exemplary user interface diagram [600] , depicting a conference event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure.
FIG. 7 illustrates an exemplary user interface diagram [700] , depicting a gaming event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure.
FIG. 8 illustrates an exemplary user interface diagram [800] , depicting a wildlife scenario to create a real-time video of the depicted scene, in accordance with exemplary embodiments of the present disclosure.
FIG. 9 illustrates another system for creating real-time video according to embodiments.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure.
In order to overcome the existing limitations of the known solutions, it is an object of the present invention to provide a system and method for creating real-time video. It is another object of the invention to implement single camera solution for capturing multiple different angle frames. It is also an object of the invention to overcome the need to stop and start recording to capture multiple frames. It is  yet another object of the invention to avoid the need of zooming to highlight some parts of camera preview frame. It is also an object of the present invention to provide better object coverage with intelligent solution as human eye cannot focus on other objects while covering one. One another object of the present invention is to provide automatic video making with the desired area of interest as opposed to manual video making and merging in the prior art.
In order to achieve the afore-mentioned objectives, the present disclosure provides a method and system for creating real-time video.
The present invention provides a method and system for creating real-time video or images, of point or area of interest. The invention provides for image analysis on a camera preview frame to identify point or area of interest in the preview frame and/or take user’s inputs for such identification. The invention then enables encoding of the particular video frame comprising the identified point or area of interest to generate a real-time image or video. The image or video generated by the invention is such that it has the same aspect ratio as of the original preview frame thus preserving the quality of the video which is otherwise distorted or degraded using prior art solutions. The invention also encompasses detection or identification of point or area of interest dynamically so as to capture changing area of interests as well as multiple areas of interests while recording.
Specifically, the present invention provides a method for creating a real-time video, the method comprising: receiving, at a camera unit [102], a camera preview frame to record a video; dividing, by a processing unit [104], the camera preview frame into at least two blocks; performing, by the processing unit [104], an image analysis on the camera preview frame to identify at least one object and an area of interest; determining, by the processing unit [104], at least one block of the camera preview frame based on the at least one object and the area of interest; and generating, by the processing unit [104], a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
As an implementation, dividing the camera preview frame into the at least two blocks comprises: determining a minimum recording resolution; and dividing the camera preview frame into the at least two blocks based on the minimum recording resolution.
As an implementation, each of the divided at least two blocks has a same aspect ratio as the camera preview frame, and encoding the at least one block of the camera preview frame comprises: encoding the at least one block of the camera preview frame in such a manner that the encoded at least one block of the camera preview frame still has the same aspect ratio as the camera preview frame.
As an implementation, the method further comprises the following before encoding the at least one block of the camera preview frame: compressing, by the processing unit [104], a resolution of the determined at least one block of the camera preview frame to the minimum recording resolution.
As an implementation, the method further includes the following before dividing the camera preview frame into the at least two blocks: determining and adjusting, by the processing unit [104], a camera preview resolution of the camera preview frame.
As an implementation, the method further includes displaying, at a display unit [106] , the encoded video with the identified at least one object and the area of interest.
As an implementation, performing the image analysis on the camera preview frame to identify the at least one object and the area of interest comprises at least one of: receiving at least one user interaction to identify the at least one object and the area of interest; and automatically identifying the at least one object and the area of interest.
As an implementation, receiving at least one user interaction to identify the at least one object comprises: receiving at least one user interaction to determine the at least one object; and determining an area around the at least one object as the area of interest.
As an implementation, the user interaction takes priority over the automatic identification.
As an implementation, automatically identifying the area of interest and the object comprises: identifying the at least one object and the area of interest dynamically according to a scene corresponding to the camera preview frame or a voice associated with the camera preview frame.
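The stated priority of user interaction over automatic identification can be expressed as a simple selection rule. The representation of an area of interest as an (x, y, w, h) tuple is an assumption for illustration:

```python
def resolve_area_of_interest(user_roi, auto_roi):
    """A user-identified area of interest takes priority over the
    automatically identified one; fall back to automatic detection
    only when the user has not intervened."""
    return user_roi if user_roi is not None else auto_roi

# With a user selection present, automatic identification is overridden.
chosen = resolve_area_of_interest(user_roi=(10, 10, 100, 100), auto_roi=(0, 0, 50, 50))
# Without user intervention, the automatic result is used.
fallback = resolve_area_of_interest(user_roi=None, auto_roi=(0, 0, 50, 50))
```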
The present invention further provides a system for creating real-time video, and the system includes a camera unit [102] and a processing unit [104] .
The camera unit [102] is configured to receive a camera preview frame, where the camera preview frame is associated with an aspect ratio.
The processing unit [104] is connected to the camera unit [102], and the processing unit [104] is configured to:
divide the camera preview frame into at least two blocks, perform an image analysis on the camera preview frame to identify at least one object and an area of interest, modify the camera preview frame to determine at least one block of camera preview frame, and encode the at least one block of camera preview frame in the aspect ratio to generate encoded video of the at least one block of camera preview frame.
As an implementation, the system further comprises a display unit [106] , connected to the camera unit [102] and processing unit [104] , and configured to display the encoded video with the at least one object and the area of interest.
As an implementation, the processing unit [104] is further configured to determine and adjust a preview resolution of the camera preview frame.
As an implementation, the processing unit [104] is further configured to compress the resolution of the determined at least one block of camera preview frame to a minimum recording resolution.
The present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out any of the above methods for creating a real-time video.
As used herein, the “camera preview frame” comprises at least one real-time preview of an event picked up by the camera sensor unit. Further, the said real-time preview of an event comprises the preview of at least one real-time imaging parameter. For instance, camera preview frame may refer to the preview generated by a camera and can be seen on the display of a user device when the user opens a camera application.
As used herein, the “imaging parameters” comprise one or more parameters such as a scene, an exposure, a face area, an ISO value, etc. As used herein, the “image/media analysis” refers to the determination of one or more imaging parameters and/or the identification of at least one face, area of interest, point of interest, object of interest, etc.
As used herein, “aspect ratio” refers to the ratio of the width to the height of the camera preview frame/image/video.
As used herein, a “processing unit” or “processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
As used herein, a “display unit” or “display” includes one or more computing devices for displaying camera preview frames, images or videos generated by user/electronic devices. The said display unit may be additional hardware coupled to the said electronic device or may be integrated within the electronic device. The display unit may further include, but is not limited to, a CRT display, LED display, ELD display, PDP display, LCD display, OLED display and the like.
As used herein, a “user device” may be any electrical, electronic, electromechanical or computing device or equipment. The user device may include, but is not limited to, a mobile phone, smart phone, laptop, general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device in which a camera can be implemented. The user device contains at least one input means configured to receive an input from the user, a processor and a display unit configured to display at least the camera preview frame, media, etc. to the user.
Referring to FIG. 1, an architecture of the system [100] is shown in accordance with exemplary embodiments of the present invention. The system [100] comprises at least one camera unit [102], at least one processing unit [104] and at least one display unit [106]. All of these components/units are connected to each other; however, the same has not been shown in Fig. 1 for the sake of clarity. The said system [100] is configured to provide at least one real-time video with the help of the said interconnection between the said camera unit [102], the said processing unit [104] and the said display unit [106].
The camera unit [102] is configured to receive at least one camera preview frame, said camera preview frame being associated with an aspect ratio. Further, the camera preview frame comprises real-time data with respect to the current events occurring in the surrounding environment. A video may then be created from this real-time data of the camera preview frame. The components and functions of the camera unit [102] are further discussed with reference to Fig. 2.
The processing unit [104] is configured to divide the preview resolution of the camera preview frame into at least two blocks and to perform an image analysis on the camera preview frame in order to identify at least one object and an area of interest, wherein the said image analysis includes identification of at least one of a face in real-time, an area of interest, an object, real-time imaging parameters associated with the said camera preview frame and the like parameters. The processing unit [104] is also configured to modify the camera preview frame to determine at least one block of the camera preview frame comprising at least one of a point of interest, an identified object and the like imaging parameters. The processing unit [104] is also configured to encode the said block of the camera preview frame in real-time in accordance with the said point of interest and the said aspect ratio, wherein the said point of interest can be changed during the recording of the video, and multiple points of interest can be encoded in accordance with the invention to select the point of interest dynamically.
In an example, the said processing unit [104] is configured to encode at least one real-time video from the camera preview frame, wherein the said encoding is in accordance with at least one point of interest. Further, in yet another example, the calculation of the area/point of interest may be based at least on existing algorithms for machine learning, scene detection, voice direction, detection of the area of interest and the like parameters. The scene detection may further comprise detection of at least one event such as a party, birthday, wedding, concert, stage, wildlife, beach and the like. The area of interest may include, for example, birthday party cake cutting, wedding dress and wedding couple, dais and mic detection and decision of location, stage and performer detection (both music and dance) and the like.
In another example, the said encoding further comprises adjusting the preview resolution of the camera preview frame to the highest possible resolution with respect to the camera unit [102]. Further, the said adjusted camera preview frame is divided into blocks to perform the image analysis on the preview frame, wherein the said image analysis is done to find at least one object and area of interest, and the said division into blocks is such that each small block has the same aspect ratio as the full preview frame/camera preview frame.
Thereafter, upon identification of at least one of an object and an area of interest, the display unit [106], in accordance with the said identified at least one object and/or area of interest, is configured to display a cropped frame from the camera preview frame/overall field of view. Further, the video is encoded on the basis of the buffer of the said blocks as one frame, wherein the said blocks are the best suited blocks and are compressed to a minimum recording resolution prior to the said video encoding.
The display unit [106] is configured to display the camera preview frame, the image generated by the processing unit [104] and the video encoded by the processing unit [104], along with the area of interest and the identified objects. The said display unit [106] may be additional hardware coupled to the said electronic/user device or may be integrated within the user/electronic device. The display unit [106] may further include, but is not limited to, a CRT display, LED display, ELD display, PDP display, LCD display, OLED display and the like.
The system [100] as shown in Fig. 1 may reside in the electronic device/user device. The invention also encompasses that the processing unit [104] of the system [100] resides at a remote server, while the camera unit [102] and the display unit [106] reside in the user device, such that the camera preview frame is captured by the camera unit [102] at the user device and sent to the processing unit [104] at the remote server for processing.
Referring now to FIG. 2, which illustrates a block diagram of the camera unit [102], in accordance with exemplary embodiments of the present invention. The said camera unit [102] comprises at least one camera sensor unit [202], at least one camera driver [204], at least one camera HAL [206], at least one camera framework [208] and at least one camera preview frame unit [210].
The camera sensor unit [202] is configured to pick up the events in the surroundings of the camera unit [102] as raw real-time data. The camera driver [204] is configured to collect the raw real-time data from the said camera sensor unit [202] and provide the same to the camera HAL [206]. The camera HAL [206] is configured to process the said collected real-time data and provide the same to the camera preview frame unit [210]. The camera preview frame unit [210] is configured to provide a graphical user interface to the user to provide a preview of the camera preview frame. The invention encompasses that the camera preview frame unit [210] is configured to display the camera preview frame on the display unit [106] of the electronic/user device. The camera preview frame unit [210] is further configured to display at least one real-time encoded video in accordance with at least one of the real-time video point of interest frame, the identified object and the like imaging parameters associated with the camera preview frame.
The camera sensor unit [202] also comprises at least one light sensitive processing unit configured to measure and process the imaging parameters of the camera preview frame.
Further, the camera framework [208] is configured to provide a module to interact with the said camera sensor unit [202], said camera driver [204], said camera HAL [206] and the said camera preview frame unit [210]. The said camera framework [208] is further configured to store files for input data, processing and the guiding mechanism.
FIG. 3 illustrates an exemplary method flow diagram [300] depicting a method for creating real-time video, in accordance with exemplary embodiments of the present disclosure. The invention encompasses that the method begins at step [302]. In an embodiment, the method begins when the user selects the automatic/intelligent recording mode, wherein the user may select the automatic/intelligent recording mode to create a real-time video by an input means. For instance, the user may select the automatic/intelligent recording mode by selecting an icon on the camera preview frame. Such selection of the automatic/intelligent recording mode may occur during the recording of a video or when the user is about to start recording. In such an embodiment, an indication of the selected automatic/intelligent recording mode is displayed to the user on the camera preview frame. For instance, an icon ‘auto-record’ may be shown over the camera preview frame, or an icon ‘manual-record’ may be shown over the camera preview frame.
The method at step [304] comprises receiving at least one camera preview frame to record a video. The said camera preview frame is associated with an aspect ratio. The camera preview frame may be received at the processing unit [104] from the camera unit [102]. The camera preview frame provides the at least one real-time preview of an event picked up by the camera sensor unit [202]. For instance, the camera preview frame may refer to the preview generated by a camera that can be seen on the display unit [106] of a user device when the user opens a camera application, wherein in an instance the said generated preview of the said camera preview frame is adjusted to the maximum possible resolution.
Thereafter, the method leads to step [306], wherein the preview resolution of the said camera preview frame is divided into at least two blocks. The said division of the preview resolution into blocks is achieved such that the aspect ratio of each small block is the same as the aspect ratio of the camera preview frame/overall field of view. In an embodiment, the camera preview frame may be divided into 4 or 16 blocks. Each block represents a small view that is required to be analysed by the processing unit [104].
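The division into same-aspect-ratio blocks and the subsequent selection of blocks that cover an identified area of interest can be sketched as follows. The helper names are hypothetical, and blocks and the area of interest are represented as (x, y, w, h) tuples for illustration:

```python
def divide_into_blocks(width, height, n):
    """Divide a width x height frame into an n x n grid; each block of
    size (width/n) x (height/n) keeps the frame's aspect ratio."""
    bw, bh = width // n, height // n
    return [(col * bw, row * bh, bw, bh)
            for row in range(n) for col in range(n)]

def blocks_overlapping(blocks, roi):
    """Select the blocks that overlap the identified area of interest."""
    rx, ry, rw, rh = roi
    def overlaps(block):
        x, y, w, h = block
        return x < rx + rw and rx < x + w and y < ry + rh and ry < y + h
    return [b for b in blocks if overlaps(b)]

# A 5120x2880 preview divided into a 4x4 grid gives 16 blocks of 1280x720.
blocks = divide_into_blocks(5120, 2880, 4)
# A hypothetical area of interest (e.g. a detected face) near the centre
# overlaps four of the sixteen blocks.
selected = blocks_overlapping(blocks, (2000, 1000, 800, 600))
```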
Further, the method at step [308] comprises performing an image analysis on the said camera preview frame to identify at least one object and an area of interest. The invention encompasses that the image analysis on the said camera preview frame comprises receiving at least one user interaction to identify the object and the area of interest. The invention also encompasses automatic identification of the area of interest and the object. The automatic identification includes identifying at least one face from the said camera preview frame, an object, a scene, a voice direction, and the like parameters, in order to identify the object and the area of interest. In an instance, if there is a user intervention for identifying the area of interest and the object, the said user intervention is taken into consideration on priority over the said automatic identification. The user intervention of identifying the area of interest and the object may be in the form of the user selecting a manual mode of recording. In another instance, the user intervention is detected when, although the recording is taking place in auto mode, the user attempts to zoom the video or set the focus to a particular area or object. The invention encompasses switching between the auto and the manual mode while recording the video.
The method thereafter leads to step [310], at which the method comprises modifying the camera preview frame to determine at least one block of the camera preview frame with at least one of an identified object and an area of interest. In an instance, the said modification is achieved by cropping the camera preview frame/overall field of view in real-time, wherein the said cropping is achieved with respect to the identified area of interest, object and/or the other associated parameters of the camera preview frame. Further, in accordance with the said cropping, the display of the display unit [106] is changed to the cropped frame from the overall field of view, which further comprises the identified area of interest and/or object. Therefore, the user is able to see those frames on the camera preview frame that include the area of interest/object only, i.e. excluding other views or frames in the original camera preview frame.
Next, the method at step [312] comprises encoding the said block of the camera preview frame in the said aspect ratio, to generate an encoded video of the said block of the camera preview frame, wherein the said block of the camera preview frame comprises the best suited block compressed to the minimum recording resolution prior to the said encoding. The best suited block of the camera preview frame is the modified/cropped camera preview frame comprising at least one of a manually/automatically identified area of interest, object and the other related parameters of the camera preview frame.
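The compression to the minimum recording resolution before encoding can be illustrated with a naive nearest-neighbour downscale; this is a stand-in for the codec-side scaling, which the patent leaves unspecified:

```python
def downscale(frame, factor):
    """Naive nearest-neighbour downscale of a frame, represented as a
    list of pixel rows, by an integer factor -- a stand-in for
    compressing a block to the minimum recording resolution."""
    return [row[::factor] for row in frame[::factor]]

# A 4x4 toy "block" downscaled by 2 becomes 2x2, keeping its aspect ratio.
small = downscale([[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 10, 11, 12],
                   [13, 14, 15, 16]], 2)
```

A production implementation would use a proper resampling filter (bilinear, Lanczos, etc.); the point here is only that the scale factor is the same in both dimensions, so the aspect ratio is preserved.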
Thereafter, upon encoding a real-time video based at least on the manually/automatically identified area of interest, object and the other related parameters of the camera preview frame, the method terminates at step [314].
The invention also encompasses storing the encoded real-time video at the user device. The invention also encompasses displaying the encoded video at the display unit [106] .
FIG. 4 refers to an exemplary method of implementation of the present invention [400], for creating real-time video, in accordance with exemplary embodiments of the present disclosure.
The method at step [402] comprises receiving a camera preview frame, wherein the said camera preview frame is associated with a real-time preview of an event picked up by the camera sensor unit [202] . Further the said real-time preview of an event comprises the preview of at least one real-time imaging parameter, area of interest, object and the associated parameters of camera preview frame. For instance, camera preview frame may refer to the preview generated by a camera and can be seen on the display unit [106] of a user device when the user opens a camera application.
Thereafter, the method at step [404] comprises starting video recording, and the method at step [406] further comprises getting/receiving a preview resolution related to the said camera preview frame and further adjusting/setting the said preview resolution to the maximum possible resolution. Thereafter, at step [408], the method further comprises dividing the said preview resolution into NxN blocks, wherein N may be 1, 2, 3, 4, and so on. The division of the said preview resolution is such that each block has the same aspect ratio as that of the camera preview frame/overall field of view.
In an instance, if the total resolution is 5120x2880 (AxB) and the resolution of each small block is 1280x720 (minimum), the possible numbers of small portions are:
16 –1280 x 720 i.e. A/4 x B/4
9 –2560 x 1440 i.e. A/2 x B/2
4 –3840 x 2160 i.e. 3A/4 x 3B/4
1 –5120 x 2880 i.e. A x B
A total of 30 combinations of different blocks are taken from the field of view. These are 30 small views that need to be analysed.
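The count of 30 can be reproduced by sliding square windows of 1, 2, 3 and 4 base blocks per side over the 4x4 grid of minimum-resolution blocks:

```python
def window_positions(grid, size):
    """Number of positions of a size x size window on a grid x grid layout."""
    return (grid - size + 1) ** 2

grid = 4  # 5120 // 1280 == 2880 // 720 == 4 base blocks per side
counts = [window_positions(grid, s) for s in range(1, grid + 1)]
# counts == [16, 9, 4, 1]: 16 views of 1280x720, 9 of 2560x1440,
# 4 of 3840x2160 and 1 of 5120x2880 -- 30 candidate views in total.
total = sum(counts)
```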
The method further at step [410] comprises performing an image analysis on the said preview, wherein the image analysis includes identification of at least one of a face in real-time, an area of interest, an object, real-time imaging parameters associated with the said camera preview frame and the like associated parameters. Thereafter, the method at step [412] comprises a user interaction to manually get objects, the area of interest and other related parameters. In an instance, while recording a celebration such as a birthday party, the user may manually focus on the main specific object, area of interest and other related parameters, without manually zooming and operating the camera, to record an object-focused video when the object is in the frame.
Further, at step [414], the method comprises automatic image analysis and processing to get objects, the area of interest and other related parameters of the camera preview frame. In an example, the calculation of the area/point of interest may be based at least on existing algorithms for machine learning, scene detection, voice direction, detection of the area of interest and the like parameters. The scene detection may further comprise detection of at least one event such as a party, birthday, wedding, concert, stage, wildlife, beach and the like. The area of interest may be, for example, birthday party cake cutting, wedding dress and wedding couple, dais and mic detection and decision of location, stage and performer detection (both music and dance) and the like. In an instance, if there is a user intervention for identifying the area of interest and the object, the said user intervention is taken into consideration on priority over the said automatic identification.
Thereafter, the method at step [416] comprises changing the display and encoding the resolution calculated from the area of interest, objects and other parameters in the same aspect ratio as that of the overall field of view/camera preview frame. Further, the said change in display comprises modifying/cropping the overall field of view with respect to at least one of the area of interest, an object and other imaging parameters related to the said camera preview frame. In an instance, the resolution of the camera preview frame is adjusted to the maximum and the resolution of the small divided blocks is compressed to the minimum before encoding the video based on the said blocks. The aspect ratio of each small divided block is the same as the aspect ratio of the overall field of view/camera preview frame.
Thereafter, the method at step [418] comprises receiving/getting the final encoded video with the area of interest, objects and other parameters.
Referring to FIG. 5, an exemplary user interface diagram [500] , depicting a celebration to create a real-time video of birthday event, in accordance with exemplary embodiments of the present disclosure is shown.
In accordance with the given exemplary user interface diagram [500], the video of the given event of birthday celebration is generated by implementing the present invention. The given exemplary user interface diagram [500] comprises a camera preview frame/overall field of view [502] of the said birthday event, a point of interest [504] and an encoded block of the said camera preview frame [506]. In an instance, there can be multiple points of interest [504] in a given event.
Further, the user interface diagram indicates that the camera preview frame [502] is being accessed in the automatic/intelligent recording mode of the present invention. Thereafter, the image analysis on the said camera preview frame [502] is done by first dividing the camera preview frame [502] into small blocks, wherein the said division is achieved considering the point of interest [504] in focus. In this instance, the best suited blocks comprising the point of interest/object [504], i.e. the cake, are identified in accordance with the invention, and such best-suited blocks are further used as a single frame to encode the real-time video using the said single frame. As shown in the given exemplary user interface diagram, the camera preview frame [502] is divided into 16 small blocks, and thereafter the 6 best suited small blocks comprising the point of interest/object of interest [504] are further considered to encode the real-time video. The aspect ratio of the said small blocks is the same as the aspect ratio of the full preview of the camera preview frame [502], thus preserving the video quality. In an instance, the said best suited small blocks are compressed to the minimum recording resolution prior to encoding the video in accordance with the single frame of the said best suited blocks. The generated final real-time video comprising the encoded block of the said camera preview frame focused on the object of interest is shown by [506].
Referring now to FIG. 6, an exemplary user interface diagram [600] , depicting a conference event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure is shown.
The given exemplary user interface diagram [600] , indicates the video generation of a conference event by implementing the present invention. The said user interface further comprises a camera preview frame/overall field of view [602] of the said conference event, point of interest [604] and encoded block of said camera preview frame [606] .
The camera preview frame/overall field of view [602] is divided into 16 small blocks, in accordance with the invention. The said division of the camera preview frame [602] is achieved to further assist the image analysis on the said camera preview frame [602], wherein the image analysis is done as per the point/area of interest [604]. The area of interest in this instance, i.e. the person delivering the speech, is identified in accordance with the invention. Although a single point of interest [604] is shown in the given example, there can be multiple points of interest [604], and other relevant parameters like a specific object/objects or imaging parameters can be taken into consideration. Thereafter, as per the given user interface at step [606], the best suited blocks comprising the point of interest [604] are further used as a single frame to encode the real-time video using the said single frame. The aspect ratio of the single frame comprising the best-suited blocks is the same as that of the preview of the camera preview frame [602]. The said camera preview frame [602] is further modified/cropped with respect to the single frame of best suited blocks comprising the point of interest [604], such that while recording the video the user is able to see the cropped video frame as shown in [606].
The two small blocks as shown in [606] comprising point of interest [604] are then used to encode the real-time video with respect to point of interest frames. Therefore, the given exemplary user interface diagram [600] , indicates a generated final real-time video comprising, encoded block of said camera preview frame [606] .
FIG. 7 illustrates an exemplary user interface diagram [700] , depicting a gaming event to create a real-time video of the event, in accordance with exemplary embodiments of the present disclosure. In accordance with the given exemplary user interface diagram [700] , the video of the given gaming event is generated by implementing the present invention. The given exemplary user interface diagram [700] , comprises a camera preview frame/overall field of view [702] of the said gaming event, point of interest [704] and encoded block of said camera preview frame [706] .
In the given user interface, the camera preview frame [702] indicates the overall field of view of a gaming event with a specific point of interest [704]. The camera preview frame is divided into 16 small blocks to further perform the image analysis on the said blocks. In an instance, the division of the camera preview frame [702] may be achieved by dividing the said camera preview frame [702] into various combinations of different small blocks, wherein the order of the said division is NxN. In yet another instance, the value of N may be 1, 2, 3, 4, and so on, considering the possible number of small portions/blocks. Further, the division is done in a manner such that the aspect ratio of each said small block is the same as the aspect ratio of the preview of the said camera preview frame [702].
Further, as shown in the user interface, in this example the point of interest [704] is a player catching a ball, and in order to record a video comprising the said player/point of interest [704] in focus, the said video is being recorded in the automatic/intelligent recording mode in accordance with the present invention.
Further, the given user interface at [706] indicates that the said camera preview frame [702], with the point of interest [704] in focus, is further divided into four small blocks having the same aspect ratio as that of the preview of the camera preview frame [702], wherein the said four small blocks are the suitable small blocks comprising the point of interest [704] to encode the real-time video with respect to the point of interest frames. These four small divided blocks comprising the point of interest [704] are then considered as a single frame to encode/generate the real-time video. The said camera preview frame [702] is further modified/cropped with respect to the single frame of best suited blocks comprising the point of interest [704].
Further the given user interface indicates an encoded block of said camera preview frame [706] , wherein the said encoded block of said camera preview frame [706] comprises the said point of interest  [704] i.e. player catching a ball in the given user interface. Therefore, in the given user interface, the real-time video is generated by considering point of interest/player catching a ball [704] in focus and the said generated video is in same aspect ratio as that of the overall field of view [702] .
Referring to FIG. 8, an exemplary user interface diagram [800] , depicting a wildlife scenario to create a real-time video of the depicted scene, in accordance with exemplary embodiments of the present disclosure is shown.
In accordance with the given exemplary user interface diagram [800] , the video of a wildlife scenario is generated by implementing the present invention. The given exemplary user interface diagram [800] , comprises a camera preview frame/overall field of view [802] of the said wildlife scenario, point of interest [804] and encoded block of said camera preview frame [806] .
In the given user interface, the camera preview frame [802] indicates the overall field of view of a wildlife scene with a specific point of interest [804]. Further, as shown in the user interface, the point of interest [804] is a lion hunting a deer, and in order to record a video comprising the said lion/point of interest [804] in focus, the said video is being recorded in the automatic/intelligent recording mode in accordance with the present invention.
In an instance when the lion/point of interest [804] is far away and is in action like hunting, the point of interest [804] is difficult to focus while recording a real-time video, therefore in order to record the said real-time video comprising point of interest [804] in focus, said video may be captured using the present invention.
As indicated in the given user interface, the camera preview frame [802] comprising the point of interest [804] is divided into 16 small blocks. Thereafter, image analysis is performed on the said small blocks with respect to the point of interest [804]. In an instance, the said small blocks are compressed to a minimum recording resolution and further used as a single frame to encode a real-time video, wherein the said single frame comprises small blocks having the area of interest in focus (i.e. suitable small blocks) and the same aspect ratio as the preview of the camera preview frame [802]. As per the given user interface, the encoded block of the said camera preview frame [806] indicates a frame of small encoded blocks, wherein the said frame further comprises four small suitable blocks.
Thereafter the best suited blocks collectively as a single frame, comprising point of interest [804] , are used to encode/generate the real-time video comprising said single frame. Therefore, in the given user interface, the real-time video is generated by considering point of interest [804] in focus and the said generated video is in same aspect ratio as that of the overall field of view/camera preview frame [802] .
FIG. 9 is a block diagram illustrating another system for creating real-time video according to embodiments. The system includes a housing (not illustrated) , a memory 901, and a central processing unit (CPU) 902 (also referred to as a processor, hereinafter CPU for short) , a circuit board (not illustrated) , and a power supply circuit (not illustrated) . The circuit board is disposed inside a space defined by the housing. The CPU 902 and the memory 901 are disposed on the circuit board. The power supply circuit is configured to supply power to each circuit or component of the system. The memory 901 is configured to store executable program codes. The CPU 902 is configured to run a computer program corresponding to the executable program codes by reading the executable program codes stored in the memory 901 to carry out: receiving a camera preview frame to record a video; dividing the camera preview frame into at least two blocks; performing an image analysis on the camera preview frame to identify at least one object and an area of interest; determining at least one block of the camera preview frame based on the  at least one object and the area of interest; generating a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
The system further includes a peripheral interface 903, a radio frequency (RF) circuit 905, an audio circuit 906, a speaker 911, a power management chip 908, an input/output (I/O) subsystem 909, other input/control devices 910, a touch screen 912, and an external port 904, which communicate with each other via one or more communication buses or signal lines 907.
It should be understood that the system 900 illustrated is merely exemplary, and the system 900 can have more or fewer components than those illustrated in FIG. 9. For example, two or more components can be combined, or different component configurations can be adopted in the system. The various components illustrated in FIG. 9 can be implemented in hardware, software, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.
The following describes a mobile phone as an example of the system for creating real-time video.
The memory 901 is accessible to the CPU 902, the peripheral interface 903, and so on. The memory 901 can include a high-speed random access memory and can further include a non-transitory memory such as one or more magnetic disk storage devices, flash memory devices, or other non-transitory solid-state memory devices.
The peripheral interface 903 is configured to connect the input and output peripherals of the device to the CPU 902 and the memory 901.
The I/O subsystem 909 is configured to connect the input and the output peripherals such as the touch screen 912 and other input/control devices 910 to the peripheral interface 903. The I/O subsystem 909 can include a display controller 9091 and one or more input controllers 9092 configured to control other input/control devices 910. The one or more input controllers 9092 are configured to receive electrical signals from or send electrical signals to other input/control devices 910, where other input/control devices 910 can include a physical button (a press button, a rocker button, etc.), a dial, a slide switch, a joystick, or a click wheel. It should be noted that the input controller 9092 can be coupled with any of a keyboard, an infrared port, a universal serial bus (USB) interface, and a pointing apparatus such as a mouse.
The touch screen 912 functions as an input interface and an output interface between the system and a user, and is configured to display a visual output to the user. The visual output can include graphics, text, icons, videos, and the like.
The display controller 9091 in the I/O subsystem 909 is configured to receive an electrical signal from or send an electrical signal to the touch screen 912. The touch screen 912 is configured to detect contact on the touch screen. The display controller 9091 is configured to convert the contact detected into an interaction with a user interface object displayed on the touch screen 912, that is, to realize human-computer interaction. The user interface object displayed on the touch screen 912 can be an icon of a running game, an icon indicating connection to corresponding networks, and the like. It should be noted that the device can also include a light mouse, which is a touch-sensitive surface that does not display a visual output, or can be an extension of a touch-sensitive surface formed by the touch screen.
The RF circuit 905 is configured to establish communication between a mobile phone and a wireless network (i.e., the network side), and to transmit and receive data between the mobile phone and the wireless network, such as transmitting and receiving short messages, emails, and the like. The RF circuit 905 is configured to receive and transmit RF signals (also known as electromagnetic signals), to convert an electrical signal into an electromagnetic signal or convert an electromagnetic signal into an electrical signal, and to communicate with a communication network and other devices through electromagnetic signals. The RF circuit 905 can include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM), and so on.
The audio circuit 906 is configured to receive audio data from the peripheral interface 903, to convert the audio data into an electrical signal, and to transmit the electrical signal to the speaker 911.
The speaker 911 is configured to restore the voice signal received by the mobile phone from the wireless network via the RF circuit 905 to sound and to play the sound to the user.
The power management chip 908 is configured for power supply and power management of the hardware connected to the CPU 902, the I/O subsystem 909, and the peripheral interface 903.
The system for creating real-time video and the storage medium can execute the method for creating real-time video of any of the above embodiments and have corresponding functional modules and advantages of executing the method. For technical details not described herein, reference can be made to the description of the method for creating real-time video.
The above are only some embodiments of the present disclosure and the technical principles applied thereto. Those skilled in the art will appreciate that the present disclosure is not limited to the embodiments described herein, and that various changes, modifications, and substitutions can be made by those skilled in the art without departing from the scope of the disclosure. Therefore, while the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various equivalent arrangements included within the scope of the disclosure. The scope of the disclosure is determined by the scope of the appended claims.
The units, interfaces, modules, and/or components depicted in the figures and described herein may be present in the form of hardware, software, or a combination thereof. Connections shown between these units/components/modules/interfaces in the exemplary system architecture may interact with each other through various wired links, wireless links, logical links, and/or physical links. Further, the units/components/modules/interfaces may be connected in other possible ways.
While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated by those skilled in the art that many changes can be made to the embodiments disclosed herein without departing from the principles and scope of the present invention.

Claims (15)

  1. A method for creating a real-time video, the method comprising:
    receiving, at a camera unit [102] , a camera preview frame to record a video;
    dividing, by a processing unit [104] , the camera preview frame into at least two blocks;
    performing, by the processing unit [104] , an image analysis on the camera preview frame to identify at least one object and an area of interest;
    determining, by the processing unit [104] , at least one block of the camera preview frame based on the at least one object and the area of interest; and
    generating, by the processing unit [104] , a real-time video of the at least one block of the camera preview frame, by encoding the at least one block of the camera preview frame.
  2. The method as claimed in claim 1, wherein dividing the camera preview frame into the at least two blocks comprises:
    determining a minimum recording resolution; and
    dividing the camera preview frame into the at least two blocks based on the minimum recording resolution.
  3. The method as claimed in claim 1 or 2, wherein each of the divided at least two blocks has a same aspect ratio as the camera preview frame, and encoding the at least one block of the camera preview frame comprises:
    encoding the at least one block of the camera preview frame in such a manner that the encoded at least one block of the camera preview frame still has the same aspect ratio as the camera preview frame.
  4. The method as claimed in claim 2 or 3, further comprising the following before encoding the at least one block of the camera preview frame:
    compressing, by the processing unit [104] , a resolution of the determined at least one block of the camera preview frame to the minimum recording resolution.
  5. The method as claimed in any of claims 1 to 4, further comprising the following before dividing the camera preview frame into the at least two blocks:
    determining and adjusting, by the processing unit [104], a camera preview resolution of the camera preview frame.
  6. The method as claimed in any of claims 1 to 5, further comprising displaying, at a display unit [106] , the encoded video with the identified at least one object and the area of interest.
  7. The method as claimed in any of claims 1 to 6, wherein performing the image analysis on the camera preview frame to identify the at least one object and the area of interest comprises at least one of:
    receiving at least one user interaction to identify the at least one object and the area of interest; and
    automatically identifying the at least one object and the area of interest.
  8. The method as claimed in claim 7, wherein receiving at least one user interaction to identify the at least one object comprises:
    receiving at least one user interaction to determine the at least one object; and
    determining an area around the at least one object as the area of interest.
  9. The method as claimed in claim 7 or 8, wherein the at least one user interaction has a higher priority than the automatic identification.
  10. The method as claimed in any of claims 7 to 9, wherein automatically identifying the at least one object and the area of interest comprises:
    identifying the at least one object and the area of interest dynamically according to a scene corresponding to the camera preview frame or a voice associated with the camera preview frame.
  11. A system for creating real-time video, the system comprising:
    a camera unit [102] configured to receive a camera preview frame, wherein the camera preview frame is associated with an aspect ratio; and
    a processing unit [104] , connected to the camera unit [102] , the processing unit configured to: divide the camera preview frame into at least two blocks,
    perform an image analysis on the camera preview frame to identify at least one object and an area of interest,
    modify the camera preview frame to determine at least one block of camera preview frame, and
    encode the at least one block of camera preview frame in the aspect ratio to generate encoded video of the at least one block of camera preview frame.
  12. The system as claimed in claim 11, wherein the system further comprises, a display unit [106] , connected to the camera unit [102] and processing unit [104] , the display unit [106] configured to display the encoded video with the at least one object and the area of interest.
  13. The system as claimed in claim 11 or 12, wherein the processing unit [104] is further configured to determine and adjust, a preview resolution of the camera preview frame.
  14. The system as claimed in any of claims 11 to 13, wherein the processing unit [104] is further configured to compress the resolution of the determined at least one block of camera preview frame to a minimum recording resolution.
  15. A non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the method of any of claims 1 to 10.
PCT/CN2020/115365 2019-10-18 2020-09-15 A system and method for creating real-time video WO2021073336A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201941042301 2019-10-18
IN201941042301 2019-10-18

Publications (1)

Publication Number Publication Date
WO2021073336A1 true WO2021073336A1 (en) 2021-04-22

Family

ID=75537703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/115365 WO2021073336A1 (en) 2019-10-18 2020-09-15 A system and method for creating real-time video

Country Status (1)

Country Link
WO (1) WO2021073336A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284085A (en) * 2013-07-08 2015-01-14 LG Electronics Inc. Electronic device and method of operating the same
CN105578275A (en) * 2015-12-16 2016-05-11 Xiaomi Inc. Video display method and apparatus
CN106664443A (en) * 2014-06-27 2017-05-10 Koninklijke KPN N.V. Determining a region of interest on the basis of a HEVC-tiled video stream
CN107810629A (en) * 2015-01-18 2018-03-16 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method
EP3389263A2 (en) * 2017-04-10 2018-10-17 Intel Corporation Technology to encode 360 degree video content
CN109194923A (en) * 2018-10-18 2019-01-11 Mouxin Technology (Shanghai) Co., Ltd. Video image processing apparatus, system and method based on non-uniform resolution


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20876185; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20876185; Country of ref document: EP; Kind code of ref document: A1)