WO2014100966A1 - Method, terminal and system for playing video - Google Patents

Method, terminal and system for playing video

Info

Publication number
WO2014100966A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
playback
video
picture
region
Prior art date
Application number
PCT/CN2012/087391
Other languages
English (en)
French (fr)
Inventor
贺荣徽
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201280005842.5A priority Critical patent/CN104081760B/zh
Priority to JP2015549913A priority patent/JP2016506167A/ja
Priority to PCT/CN2012/087391 priority patent/WO2014100966A1/zh
Priority to CN201711339410.9A priority patent/CN108401134A/zh
Priority to KR1020157017260A priority patent/KR101718373B1/ko
Priority to EP12878638.1A priority patent/EP2768216A4/en
Priority to US14/108,180 priority patent/US9064393B2/en
Publication of WO2014100966A1 publication Critical patent/WO2014100966A1/zh

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19665 Details related to the storage of video surveillance data
    • G08B13/19669 Event triggers storage or change of storage policy
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/87 Regeneration of colour television signals
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19654 Details concerning communication with a camera
    • G08B13/19656 Network used to communicate with a camera, e.g. WAN, LAN, Internet
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19665 Details related to the storage of video surveillance data
    • G08B13/19671 Addition of non-video data, i.e. metadata, to video stream
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements

Definitions

  • The present invention relates to the field of video surveillance, and in particular to a method, a terminal, and a system for playing video in the field of video surveillance.

Background Art
  • An existing video surveillance client generally plays the video pictures of multiple cameras at the same time. As the resolution of video images increases, the total resolution of the video pictures of multiple cameras often exceeds the resolution of the client's monitor. Taking a 22-inch display as an example, the maximum supported resolution is typically 1920×1080, which means that only one 1080p picture can be played at full size; if multiple 1080p pictures are to be played simultaneously on one monitor, each picture must be scaled down.
  • In addition to the playback window, a typical video surveillance client has a number of auxiliary panels such as a title bar, a camera list, and a pan/tilt control panel, which further reduce the space available for displaying video. As a result, the playback window can only show a picture much smaller than the original.
  • When an event occurs on the video screen (such as an event triggered by intelligent analysis) and the picture has already been scaled down for playback, the area in which the event occurs becomes smaller still, which is inconvenient for the user to view. When personnel monitor the picture, the reduced picture makes it difficult for the observer to notice details, and key information may be missed.
  • Embodiments of the present invention provide a method, a terminal, and a system for playing a video, which can improve the user experience.
  • An embodiment of the present invention provides a method for playing a video, the method comprising: dividing an original play picture into at least two regions of interest; determining a first region of interest, among the at least two regions of interest, in which a trigger event occurs; acquiring decoded data of a first video picture displayed in the first region of interest; and rendering the decoded data of the first video picture to a specified play window for playing.
  • The method may further include: determining a correspondence between each of the at least two regions of interest and a specified play window. In that case, rendering the decoded data to the specified play window for playing includes: rendering the decoded data of the first video picture, according to the correspondence, to the specified play window corresponding to the first region of interest for playing.
  • Determining the first region of interest, among the at least two regions of interest, in which a trigger event occurs may include: determining a trigger operation performed by the user on a region of interest in the original play picture, the trigger operation including a click operation, a double-click operation, or a selection operation on the region of interest; and determining the region of interest on which the trigger operation is performed as the first region of interest.
  • Determining the first region of interest, among the at least two regions of interest, in which a trigger event occurs may alternatively include: acquiring coordinate metadata of the point at which the trigger event occurs in the original play picture; and determining, according to the coordinate metadata, the region of interest to which that point belongs as the first region of interest.
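As an illustration of the coordinate-metadata variant above, the lookup from a trigger-event point to the region of interest containing it can be sketched as follows; the rectangle representation and all names are hypothetical, not taken from the patent:

```python
def roi_containing_point(rois, x, y):
    """Return the index of the region of interest containing the
    trigger-event point (x, y), or None if no region contains it.
    Each region is a (start_x, start_y, end_x, end_y) rectangle."""
    for index, (sx, sy, ex, ey) in enumerate(rois):
        if sx <= x < ex and sy <= y < ey:
            return index
    return None

# Example: the original play picture split into four quadrants of 1920x1080.
quadrants = [(0, 0, 960, 540), (960, 0, 1920, 540),
             (0, 540, 960, 1080), (960, 540, 1920, 1080)]
```

The client could keep such a table alongside the region-to-window correspondence, so the trigger event's coordinates directly select both the first region of interest and its play window.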
  • Acquiring the decoded data of the first video picture displayed in the first region of interest includes: acquiring decoded data of the original play picture; and determining the decoded data of the first video picture according to the decoded data of the original play picture.
  • Rendering the decoded data of the first video picture to the specified play window for playing may include: rendering the decoded data of the first video picture to the specified play window for enlarged playing, wherein the specified play window is larger than the first region of interest.
  • Rendering the decoded data of the first video picture to the specified play window for playing may also include: popping up an independent play window; and rendering the decoded data of the first video picture to the independent play window for playing.
  • An embodiment of the present invention further provides a terminal for playing a video, the terminal including: a dividing module configured to divide an original play picture into at least two regions of interest; a first determining module configured to determine a first region of interest, among the at least two regions of interest divided by the dividing module, in which a trigger event occurs; an acquiring module configured to acquire decoded data of a first video picture displayed in the first region of interest determined by the first determining module; and a playing module configured to render the decoded data of the first video picture acquired by the acquiring module to a specified play window for playing.
  • The terminal may further include a second determining module configured to determine a correspondence between each of the at least two regions of interest and the specified play window; the playing module is then further configured to render the decoded data of the first video picture acquired by the acquiring module, according to the correspondence determined by the second determining module, to the specified play window corresponding to the first region of interest for playing.
  • The first determining module may include: a first determining unit configured to determine a trigger operation performed by the user on a region of interest in the original play picture, the trigger operation including a click operation, a double-click operation, or a selection operation on the region of interest; and a second determining unit configured to determine the region of interest on which the trigger operation determined by the first determining unit is performed as the first region of interest.
  • The first determining module may alternatively include: a first acquiring unit configured to acquire coordinate metadata of the point at which the trigger event occurs in the original play picture; and a third determining unit configured to determine, according to the coordinate metadata acquired by the first acquiring unit, the region of interest to which that point belongs as the first region of interest.
  • The acquiring module may include: a second acquiring unit configured to acquire the decoded data of the original play picture; and a third determining unit configured to determine the decoded data of the first video picture according to the decoded data of the original play picture acquired by the second acquiring unit.
  • The playing module may be further configured to render the decoded data of the first video picture to the specified play window for enlarged playing, wherein the specified play window is larger than the first region of interest.
  • The playing module may include: a pop-up unit configured to pop up an independent play window; and a playing unit configured to render the decoded data of the first video picture to the independent play window popped up by the pop-up unit for playing.
  • An embodiment of the present invention further provides a system for playing a video, the system comprising: a terminal according to the second aspect of the present invention; a video capture system configured to collect video images and generate a media stream by encoding the video images; a server configured to obtain the media stream generated by the video capture system and provide the media stream to the terminal; and a storage device configured to store the media stream obtained by the server. The terminal includes: a dividing module configured to divide an original play picture into at least two regions of interest; a first determining module configured to determine a first region of interest, among the at least two regions of interest divided by the dividing module, in which a trigger event occurs; an acquiring module configured to acquire decoded data of the first video picture displayed in the first region of interest determined by the first determining module; and a playing module configured to render the decoded data of the first video picture acquired by the acquiring module to a specified play window for playing.
  • Embodiments of the present invention provide a method, a terminal, and a system for playing a video. By dividing the original play picture into a plurality of regions of interest and separately displaying the picture of a region of interest in which a trigger event occurs, on the one hand the user can observe the picture details of the region of interest more clearly, and on the other hand the user can simultaneously track the picture details of multiple regions of interest, thereby significantly improving the user experience.
  • FIG. 1 is a schematic structural diagram of an exemplary application scenario of an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for playing a video according to an embodiment of the present invention.
  • FIG. 3 is another schematic flowchart of a method for playing a video according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a method of dividing a region of interest according to an embodiment of the present invention.
  • FIG. 5 is another schematic flowchart of a method of dividing a region of interest according to an embodiment of the present invention.
  • FIG. 6 is a schematic flow diagram of a method of determining a region of interest with a triggering event, in accordance with an embodiment of the present invention.
  • FIG. 7 is another schematic flow diagram of a method of determining a region of interest with a triggering event in accordance with an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a method of acquiring decoded data of a region of interest according to an embodiment of the present invention.
  • FIG. 9 is a schematic flow diagram of a method of playing a picture of a region of interest, in accordance with an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of a method for playing a video according to another embodiment of the present invention.
  • FIG. 11A and FIG. 11B are another schematic flowchart of a method of playing a video according to another embodiment of the present invention.
  • FIG. 12A and FIG. 12B are schematic diagrams of playing a region of interest according to an embodiment of the present invention.
  • FIG. 13 is a schematic block diagram of a terminal according to an embodiment of the present invention.
  • FIG. 14 is another schematic block diagram of a terminal according to an embodiment of the present invention.
  • FIG. 15 is a schematic block diagram of a first determining module according to an embodiment of the present invention.
  • FIG. 16 is another schematic block diagram of a first determining module according to an embodiment of the present invention.
  • FIG. 17 is a schematic block diagram of an acquisition module according to an embodiment of the present invention.
  • FIG. 18 is a schematic block diagram of a playback module in accordance with an embodiment of the present invention.
  • FIG. 19 is a schematic block diagram of a system according to an embodiment of the present invention.
  • FIG. 20 is a schematic block diagram of a terminal according to another embodiment of the present invention.

Detailed Description
  • FIG. 1 shows a schematic architectural diagram of an exemplary application scenario of an embodiment of the present invention.
  • A video surveillance system suitable for applying the embodiments of the present invention may include: a video capture device, a central server, a storage device, and a terminal running a client. The video capture device may be used to collect video images and may encode them to generate a media stream for network transmission.
  • The video capture device may include a network camera, an analog camera, an encoder, a digital video recorder (Digital Video Recorder, "DVR" for short), and the like.
  • After connecting to the central server, the client can request a video stream, decode and display it, and thereby provide users with live video viewing.
  • The central server may include a management server and a media server. The media server may be responsible for receiving the media stream, recording the media stream data on the storage device, and forwarding the media stream to clients for on-demand playback; the management server may be responsible for functions such as user login, authentication, and service scheduling. The central server can also accept access from multiple clients and manage the connections within the video surveillance system over the network.
  • The storage device may be, for example, a disk array responsible for storing video data; storage may use Network Attached Storage ("NAS"), a Storage Area Network ("SAN"), or the server's own storage.
  • It should be understood that FIG. 1 shows only one scenario suitable for applying the method of the present invention and is not intended to impose any limitation on the use or functionality of the present invention; the present invention should not be construed as requiring any one, or any combination, of the components of the video surveillance system shown. However, for clarity of explanation, the following description uses this video surveillance system as an example application scenario; the present invention is not limited thereto.
  • The video data transmission solution in the embodiment of the present invention may be used over various communication networks or systems, for example: a Global System for Mobile Communications ("GSM") system, a Code Division Multiple Access ("CDMA") system, a Wideband Code Division Multiple Access ("WCDMA") system, a General Packet Radio Service ("GPRS") system, a Long Term Evolution ("LTE") system, an LTE Frequency Division Duplex ("FDD") system, an LTE Time Division Duplex ("TDD") system, a Universal Mobile Telecommunications System ("UMTS"), and the like.
  • the method 100 includes:
  • The device playing the video may first divide the original play picture into multiple regions of interest.
  • According to the method for playing a video in the embodiment of the present invention, dividing the original play picture into a plurality of regions of interest and separately displaying the picture of the region of interest in which the trigger event occurs on the one hand enables the user to observe the picture details of the region of interest more clearly, and on the other hand enables the user to simultaneously track the picture details of multiple regions of interest, thereby significantly improving the user experience.
  • The video may be a video file or a real-time video stream. The embodiments of the present invention are described using real-time video stream playback as an example, but the embodiments of the present invention are not limited thereto.
  • Optionally, the method 100 further includes: determining a correspondence between each of the at least two regions of interest and a specified play window. Rendering the decoded data of the first video picture to the specified play window for playing then includes: rendering the decoded data of the first video picture, according to the correspondence, to the specified play window corresponding to the first region of interest for playing. That is, each region of interest may be associated with one or more play windows used to play the picture of that region of interest when a trigger event occurs in it.
  • The specified window may be the maximum play window of the display device, or part of the maximum play window; it may be a currently existing play window or part of an existing play window, or a newly popped-up or newly created play window. The embodiment of the present invention is not limited thereto.
  • Optionally, dividing the original play picture into at least two regions of interest includes: dividing the original play picture into the at least two regions of interest in an equally divided manner or in a free-division manner.
  • For example, multiple regions of interest may be pre-divided at the client. The sizes of the regions of interest may be equal or unequal, and a region of interest may even be set as an irregular region; the correspondence between the different regions of interest and the play windows may also be determined at this time.
  • The division of regions of interest can be performed manually by the user or configured automatically by the client software, and the result is saved by the client; the picture may be divided equally or freely.
  • Example configuration processes are shown in FIG. 4 and FIG. 5.
  • the method of dividing the region of interest in an equally divided manner includes:
  • The method for dividing the region of interest in a free-division manner may include, for example:
  • The original play picture used for dividing regions of interest may be the entire picture in the maximum play window of the display device, or one or more of the multiple pictures played simultaneously in the maximum play window; the embodiments of the present invention are not limited thereto.
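The equally divided case above can be sketched as follows; this is a hypothetical helper assuming rectangular regions addressed in pixel coordinates, since the patent does not prescribe a data structure:

```python
def divide_equally(width, height, rows, cols):
    """Divide a width x height play picture into rows x cols equal
    regions of interest, each returned as (start_x, start_y, end_x, end_y)."""
    return [(c * width // cols, r * height // rows,
             (c + 1) * width // cols, (r + 1) * height // rows)
            for r in range(rows) for c in range(cols)]
```

Free division would instead store user-drawn rectangles (or irregular outlines) directly, rather than computing them from a grid.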
  • The device playing the video determines a first region of interest, among the at least two regions of interest, in which a trigger event occurs, so as to display the picture of the first region of interest in a separate play window, thereby improving the display of picture details.
  • The event may be triggered manually by the user, with the region of interest determined from the user's operation, or the region of interest may be determined from a trigger generated automatically by event detection; both cases are described below in conjunction with FIG. 6 and FIG. 7.
  • Determining the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes: determining a trigger operation performed by the user on a region of interest in the original play picture, where the trigger operation includes a click operation, a double-click operation, or a selection operation on the region of interest; and determining the region of interest on which the trigger operation is performed as the first region of interest.
  • For example, the user may operate on the user interface to trigger a region of interest in the original play picture, so that the picture of the region of interest in which the event occurs is played in a previously specified play window or displayed in a popped-up independent play window; when events occur in multiple regions of interest, multiple windows may be triggered for display.
  • The trigger operation is, for example, a click operation, a double-click operation, or a selection operation on the region of interest; the embodiment of the present invention is not limited thereto.
  • FIG. 7 illustrates another schematic flow diagram of a method of determining a region of interest with a triggering event in accordance with an embodiment of the present invention.
  • Determining the first region of interest, among the at least two regions of interest, in which a trigger event occurs may also proceed as follows.
  • The user can pre-configure the areas in which events are to be detected automatically, and configure event detection rules, such as motion detection or intelligent analysis detection.
  • The client software may determine the corresponding pre-configured region of interest according to the coordinate metadata of the point at which the trigger event occurs, and can then play the corresponding picture in the previously specified play window or display it in a popped-up independent play window; when events occur in multiple regions of interest, multiple windows may be triggered for display.
  • The trigger event may span multiple regions of interest; in that case, each of those regions of interest may be determined as a first region of interest in which the trigger event occurs. The embodiment of the present invention is not limited thereto.
  • In the embodiment of the present invention, the device playing the video may determine whether a trigger event occurs in a region of interest by means of motion detection, intelligent analysis detection, or the like. Alternatively, the central server may perform the detection and, upon detecting a trigger event, feed back the coordinate metadata of the point at which the trigger event occurs to the device playing the video, so that the device can determine, according to the coordinate metadata, the first region of interest in which the trigger event occurs. The embodiment of the present invention is not limited thereto.
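As one concrete, deliberately simple example of the motion detection mentioned above, the trigger point can be found by differencing two consecutive decoded luma frames; real surveillance products use far more robust analysis, and every name here is illustrative only:

```python
def motion_trigger_point(prev_frame, curr_frame, width, threshold=30):
    """Compare two flat 8-bit luma frames and return the (x, y) coordinate
    of the first pixel whose change exceeds the threshold, or None.
    That coordinate serves as the trigger event's coordinate metadata."""
    for i, (a, b) in enumerate(zip(prev_frame, curr_frame)):
        if abs(a - b) > threshold:
            return (i % width, i // width)
    return None
```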
  • the device playing the video acquires the decoded data of the first video picture displayed in the first region of interest, so as to play the first video picture in the specified play window.
  • Acquiring the decoded data of the first video picture displayed in the first region of interest includes:
  • S131: Acquire the decoded data of the original play picture.
  • S132: Determine the decoded data of the first video picture according to the decoded data of the original play picture.
  • Specifically, after the device playing the video receives an event manually triggered by the user (including a mouse click, a double-click, a click on a toolbar button, or a shortcut-key trigger), or determines according to the coordinate metadata the first region of interest in which the trigger event occurs, the device may extract the data belonging to the region of interest from the decoded YUV data of the original play window and, according to the pre-configured correspondence, play that content in the pre-assigned play pane (or in a popped-up independent play window). Since the multiple play windows use the same YUV data source, the device does not need to request or add additional video streams.
  • For example, let the resolution of the original play picture be Width × Height; let the coordinates of the starting point of the region of interest be (StartX, StartY) and those of the ending point be (EndX, EndY); let the YUV data of the original play picture be stored in the array Org[Width × Height] and the YUV data of the region of interest in Dst[ROIWidth × ROIHeight], where n is any point in the region of interest. The YUV data of the region of interest can then be determined according to the following equations:

    ROIWidth = EndX − StartX
    ROIHeight = EndY − StartY
    Dst[n] = Org[(StartY + n / ROIWidth) × Width + StartX + (n mod ROIWidth)]
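The mapping from the original play picture's array Org to the region-of-interest array Dst described above is a row-by-row copy. A sketch in Python, operating only on a flat luma (Y) plane for brevity (chroma planes, which are subsampled in common YUV formats, are omitted); the row-major copy formula in the docstring is this writer's reconstruction:

```python
def crop_roi_luma(org, width, start_x, start_y, end_x, end_y):
    """Extract the region-of-interest samples from the original play
    picture's flat luma array Org[Width * Height], following
    Dst[n] = Org[(StartY + n // ROIWidth) * Width + StartX + n % ROIWidth]."""
    roi_width = end_x - start_x
    roi_height = end_y - start_y
    dst = [0] * (roi_width * roi_height)
    for n in range(len(dst)):
        dst[n] = org[(start_y + n // roi_width) * width + start_x + n % roi_width]
    return dst
```

Because the crop reuses already decoded data, no second video stream or decoder instance is needed, which is the point made in the surrounding text.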
  • the device playing the video renders the decoded data of the first video picture to a designated play window for playing.
  • The device playing the video may play the first video picture in a pop-up window, in a newly created play window, or in the original play window, and may digitally zoom the first video picture to fit the size of the play window.
  • The specified window may be the maximum play window of the display device or part of the maximum play window; it may be a currently existing play window or part of an existing play window, or a newly popped-up or newly created play window; the specified window may be one window or more than one window. The embodiment of the present invention is not limited thereto.
  • the decoding of the decoded data of the first video frame to the specified play window for playing includes:
  • the decoded data of the first video picture is rendered to the designated play window for enlarged play, wherein the designated play window is larger than the first region of interest.
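Enlarged playing of the cropped region can be illustrated with nearest-neighbor scaling; this is a sketch only, as a real client would normally let the rendering pipeline or display hardware perform the scaling:

```python
def scale_nearest(src, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbor enlargement of a flat luma array from
    src_w x src_h to the dst_w x dst_h size of the specified play window."""
    dst = [0] * (dst_w * dst_h)
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        for x in range(dst_w):
            dst[y * dst_w + x] = src[sy * src_w + x * src_w // dst_w]
    return dst
```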
  • the decoding of the decoded data of the first video picture to the specified play window for playing includes:
  • the specified window is a pop-up independent play window
  • the pop-up independent play window may be larger than the first region of interest to perform enlarged play on the first video frame, but the present invention is implemented.
  • the example is not limited to this.
  • the independent play window can also be smaller than or equal to the first region of interest.
  • B corresponding to A means that B is associated with A and can be determined from A; however, it should also be understood that determining B from A does not mean that B is determined only from A
  • B may also be determined from A and/or other information
  • the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention
  • the method for playing video in the embodiment of the present invention divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs
  • on the one hand this enables the user to observe the picture details of the region of interest more clearly and intuitively, and on the other hand it enables the user to track the picture details of multiple regions of interest simultaneously, thereby significantly improving the user experience; further, the embodiment of the present invention plays the picture of the region of interest using the original decoded data, without adding extra video streams
  • the method 200 for playing a video may be performed by a device that plays a video, such as a terminal or a client.
  • the method 200 may include:
  • GUI (Graphical User Interface)
  • Figure 11A shows a schematic flow diagram of a method 300 of playing a manually triggered region of interest in accordance with an embodiment of the present invention, which may be performed by a device that plays video, such as a terminal or client.
  • the method 300 can include:
  • the region-of-interest YUV data is rendered to the specified play window for playing; for example, as shown in FIG. 12A, the whole play window includes an original-picture window and three specified play windows of the same size as the original-picture window, the original picture being divided into 16 regions of interest, and the picture of the region of interest with a manually triggered event being played enlarged in one of the specified play windows;
  • Figure 11B shows a schematic flow diagram of a method 400 of playing a region of interest triggered automatically by an event, in accordance with an embodiment of the present invention.
  • the method 400 can include:
  • the region-of-interest YUV data is rendered to the specified play window for playing; for example, as shown in FIG. 12B, the whole play window includes an original-picture window and three specified play windows of the same size as the original-picture window, the original picture being divided into 16 regions of interest, and the picture of the region of interest with the trigger event being played enlarged in one of the specified play windows;
  • the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention
  • the method for playing video in the embodiment of the present invention, by dividing the original playback picture into multiple regions of interest and separately displaying the picture of the region of interest in which a trigger event occurs, on the one hand enables the user to observe clearer picture details of the region of interest and, on the other hand, enables the user to track the picture details of multiple regions of interest simultaneously, thereby significantly improving the user experience.
  • The method of playing a video according to an embodiment of the present invention has been described in detail above with reference to FIG. 1 to FIG. 12B.
  • a terminal and system for playing a video according to an embodiment of the present invention will be described in detail below with reference to FIGS. 13 to 20.
  • FIG. 13 shows a schematic block diagram of a terminal 500 in accordance with an embodiment of the present invention. As shown in FIG. 13, the terminal 500 includes:
  • a dividing module 510, configured to divide the original playback picture into at least two regions of interest; a first determining module 520, configured to determine a first region of interest, among the at least two regions of interest divided by the dividing module 510, in which a trigger event occurs;
  • the obtaining module 530 is configured to acquire decoded data of the first video picture displayed in the first region of interest determined by the first determining module 520;
  • the playing module 540 is configured to render the decoded data of the first video frame acquired by the obtaining module 530 to a specified playing window for playing.
  • the terminal for playing video according to the embodiment of the present invention divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs, thereby on the one hand enabling the user to observe clearer picture details of the region of interest, and on the other hand enabling the user to track the picture details of multiple regions of interest simultaneously, significantly improving the user experience.
  • the terminal that plays the video can play either a video file or a real-time video stream
  • the embodiment of the present invention is described using the terminal playing a real-time video stream as an example, but the embodiment of the present invention is not limited thereto.
  • the terminal 500 further includes: a second determining module 550, configured to determine a correspondence between each of the at least two regions of interest and a specified play window;
  • the playing module 540 is further configured to render, according to the correspondence determined by the second determining module 550, the decoded data of the first video picture acquired by the acquiring module 530 to the specified play window corresponding to the first region of interest for playing.
  • the dividing module 510 is further configured to divide the original playback picture into the at least two regions of interest in an equal-division manner or a free-division manner.
  • the first determining module 520 includes: a first determining unit 521, configured to determine a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation or a selection operation on the region of interest; and a second determining unit 522, configured to determine the region of interest on which the trigger operation determined by the first determining unit 521 is performed as the first region of interest.
  • the first determining module 520 includes: a first acquiring unit 523, configured to acquire coordinate metadata of a trigger event occurrence point in the original playing screen;
  • the third determining unit 524 is configured to determine, according to the coordinate metadata acquired by the first acquiring unit 523, the region of interest to which the trigger event occurrence point belongs as the first region of interest.
  • the acquiring module 530 includes: a second acquiring unit 531, configured to acquire decoded data of the original playing screen;
  • the third determining unit 532 is configured to determine, according to the decoded data of the original playing screen acquired by the second acquiring unit 531, the decoded data of the first video picture.
  • the playing module is further configured to render the decoded data of the first video picture to the designated play window for enlarged play, wherein the specified play window is larger than the first region of interest.
  • the playing module 540 includes: a pop-up unit 541, configured to pop up an independent play window;
  • the playing unit 542 is configured to render the decoded data of the first video picture to the independent playing window popped up by the pop-up unit for playing.
  • the terminal 500 for playing video may correspond to the device for playing video in the embodiments of the present invention, and the above and other operations and/or functions of the respective modules in the terminal 500 are respectively for implementing the corresponding processes of the methods 100 to 400 in FIG. 1 to FIG. 12B; for brevity, they are not described herein again.
  • the terminal for playing video according to the embodiment of the present invention divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs, thereby on the one hand enabling the user to observe clearer picture details of the region of interest, and on the other hand enabling the user to track the picture details of multiple regions of interest simultaneously, significantly improving the user experience.
  • Figure 19 shows a schematic block diagram of a system 600 in accordance with an embodiment of the present invention. As shown in Figure 19, the system 600 includes:
  • Terminal 610 according to an embodiment of the present invention.
  • a video capture system 620, configured to capture video images and generate a media stream by encoding the video images;
  • the server 630 is configured to obtain the media stream generated by the video capture system 620, and provide the media stream to the terminal 610;
  • the storage device 640 is configured to store the media stream obtained by the server 630.
  • the system 600 for playing video may include a terminal 610 corresponding to the terminal 500 for playing video in the embodiments of the present invention, and the above and other operations and/or functions of the respective modules in the terminal 610 are respectively for implementing
  • the corresponding processes of the methods 100 to 400 in FIG. 1 to FIG. 12B; for brevity, they are not described herein again.
  • the system for playing video divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs, thereby on the one hand enabling the user to observe clearer picture details of the region of interest, and on the other hand enabling the user to track the picture details of multiple regions of interest simultaneously, significantly improving the user experience.
  • the embodiment of the present invention further provides a terminal for playing video.
  • the terminal 700 includes: a processor 710, a memory 720, and a bus system 730.
  • the processor 710 and the memory 720 are connected by a bus system 730.
  • the memory 720 is configured to store instructions
  • the processor 710 is configured to execute the instructions stored in the memory 720, wherein the processor 710 is configured to: divide the original playback picture into at least two regions of interest; determine a first region of interest, among the at least two regions of interest, in which a trigger event occurs; acquire decoded data of the first video picture displayed in the first region of interest; and render the decoded data of the first video picture to a specified play window for playing.
  • the terminal for playing video according to the embodiment of the present invention divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs, thereby on the one hand enabling the user to observe clearer picture details of the region of interest, and on the other hand enabling the user to track the picture details of multiple regions of interest simultaneously, significantly improving the user experience.
  • the processor 710 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 720 can include read only memory and random access memory and provides instructions and data to the processor 710. A portion of memory 720 may also include non-volatile random access memory. For example, the memory 720 can also store information of the device type.
  • the bus system 730 can include, in addition to the data bus, a power bus, a control bus, and a status signal bus. However, for clarity of description, various buses are labeled as bus system 730 in the figure.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 710 or an instruction in the form of software.
  • the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
  • the software modules may be located in a storage medium well established in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory 720.
  • the processor 710 reads the information in the memory 720 and combines the hardware to perform the steps of the above method. To avoid repetition, it will not be described in detail here.
  • the processor 710 is further configured to: determine a correspondence between each of the at least two regions of interest and a specified play window; and the rendering, by the processor 710, of the decoded
  • data of the first video picture to the specified play window for playing includes: according to the correspondence, rendering the decoded data of the first video picture to the specified play window corresponding to the first region of interest for playing.
  • the dividing, by the processor 710, of the original playback picture into the at least two regions of interest includes: dividing the original playback picture into the at least two regions of interest in an equal-division manner or a free-division manner.
  • the processor 710 determines the first region of interest, among the at least two regions of interest, in which a trigger event occurs by: determining a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation or a selection operation on the region of interest; and determining the region of interest on which the trigger operation is performed as the first region of interest.
  • the determining, by the processor 710, of the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes: acquiring coordinate metadata of the point where the trigger event occurs within the original playback picture; and, according to the coordinate metadata, determining the region of interest to which the trigger-event point belongs as the first region of interest.
  • the acquiring, by the processor 710, of the decoded data of the first video picture displayed in the first region of interest includes: acquiring decoded data of the original playback picture; and determining the decoded data of the first video picture according to the decoded data of the original playback picture.
  • the processor 710 is further configured to: render the decoded data of the first video picture to the designated play window for enlarged play, wherein the designated play window is larger than the first region of interest.
  • the terminal 700 further includes a display 740, where the rendering, by the processor 710, of the decoded data of the first video picture to the specified play window for playing includes: popping up an independent play window; and rendering the decoded data of the first video picture to the independent play window for playing.
  • the terminal 700 for playing video may correspond to the terminal 500 or the terminal 610 for playing video in the embodiment of the present invention, and the above operations and/or functions of the respective modules in the terminal 700 are respectively The corresponding processes of the respective methods 100 to 400 in FIG. 1 to FIG. 12B are implemented, and are not described herein again.
  • the terminal for playing video according to the embodiment of the present invention divides the original playback picture into multiple regions of interest and separately displays the picture of the region of interest in which a trigger event occurs, thereby on the one hand enabling the user to observe clearer picture details of the region of interest, and on the other hand enabling the user to track the picture details of multiple regions of interest simultaneously, significantly improving the user experience.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division
  • in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, or an electrical, mechanical or other form of connection.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the part of the technical solution of the present invention that contributes in essence to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium,
  • including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

The present invention discloses a method, apparatus and system for playing video. The method includes: dividing an original playback picture into at least two regions of interest; determining a first region of interest, among the at least two regions of interest, in which a trigger event occurs; acquiring decoded data of a first video picture displayed in the first region of interest; and rendering the decoded data of the first video picture to a specified play window for playing. By dividing the original playback picture into multiple regions of interest and separately displaying the picture of the region of interest in which a trigger event occurs, the method, apparatus and system of the embodiments of the present invention on the one hand enable the user to observe a clearer picture of the region of interest and, on the other hand, enable the user to track the pictures of multiple regions of interest simultaneously, thereby significantly improving the user experience.

Description

Method, Terminal and System for Playing Video

Technical Field

The present invention relates to the field of video surveillance, and in particular to a method, terminal and system for playing video in the field of video surveillance.

Background

At present, high-definition video has become an important technical trend in the field of video surveillance, and cameras with 720p and 1080p resolution are used ever more widely. As resolution keeps rising, the area that a single camera can monitor grows larger and details become clearer; meanwhile, intelligent analysis of video pictures is gradually being applied. With the development of hardware, device performance can now satisfy the demand for intelligent analysis of multiple regions of interest within the same picture, greatly reducing the cost of manual monitoring.

Existing video surveillance clients generally play the video pictures of multiple cameras at the same time, and as picture resolution rises, the total resolution of the pictures of multiple cameras often exceeds the resolution range of the client monitor. Taking a 22-inch display as an example, the maximum resolution generally supports only 1920*1080, that is, only one 1080p picture can be played; to play multiple 1080p pictures simultaneously on one monitor, the pictures must be shrunk. Moreover, on a typical surveillance client's interface there are, besides the play window, a title bar, a camera list, a PTZ control panel and other auxiliary panels, which further reduce the space in which video can be displayed. The picture that a play window can play is therefore much smaller than the original picture.

In particular, when an event occurs in the video picture (for example, a trigger event from intelligent analysis), because the picture is shown shrunken during playback, the area where the event occurs is even smaller, which makes viewing inconvenient for the user. If personnel monitor the picture with the naked eye, the shrunken picture makes it hard for the observer to notice changes in detail, so key information may be missed.

At present most clients provide a box-select zoom function: the mouse is dragged over the playing picture to select a box, and the selected area is shown enlarged, which improves the picture quality of the region of interest to some extent. However, digital scaling of the video picture causes some pixel information to be lost, which degrades the picture quality and the user's view of picture details. In addition, the box-select zoom requires manual operation by the user: when an event happens quickly, there may be no time to operate and the event is lost; and when events occur in different areas of the picture, the two areas cannot both be enlarged. The user experience is therefore poor.

Summary of the Invention
Embodiments of the present invention provide a method, terminal and system for playing video, which can improve the user experience.

In a first aspect, an embodiment of the present invention provides a method for playing video, the method including: dividing an original playback picture into at least two regions of interest; determining a first region of interest, among the at least two regions of interest, in which a trigger event occurs; acquiring decoded data of a first video picture displayed in the first region of interest; and rendering the decoded data of the first video picture to a specified play window for playing.

In a first possible implementation of the first aspect, the method further includes: determining a correspondence between each of the at least two regions of interest and a specified play window; the rendering of the decoded data of the first video picture to a specified play window for playing including: according to the correspondence, rendering the decoded data of the first video picture to the specified play window corresponding to the first region of interest for playing.

With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the determining of the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes: determining a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation or a selection operation on the region of interest; and determining the region of interest on which the trigger operation is performed as the first region of interest.

With reference to the first aspect or the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the determining of the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes: acquiring coordinate metadata of the point where the trigger event occurs within the original playback picture; and, according to the coordinate metadata, determining the region of interest to which the trigger-event point belongs as the first region of interest.

With reference to the first aspect or any one of the first to third possible implementations of the first aspect, in a fourth possible implementation of the first aspect, the acquiring of the decoded data of the first video picture displayed in the first region of interest includes: acquiring decoded data of the original playback picture; and determining the decoded data of the first video picture according to the decoded data of the original playback picture.

With reference to the first aspect or any one of the first to fourth possible implementations of the first aspect, in a fifth possible implementation of the first aspect, the rendering of the decoded data of the first video picture to a specified play window for playing includes: rendering the decoded data of the first video picture to the specified play window for enlarged playing, where the specified play window is larger than the first region of interest.

With reference to the first aspect or any one of the first to fifth possible implementations of the first aspect, in a sixth possible implementation of the first aspect, the rendering of the decoded data of the first video picture to a specified play window for playing includes: popping up an independent play window; and rendering the decoded data of the first video picture to the independent play window for playing.

In a second aspect, an embodiment of the present invention provides a terminal for playing video, the terminal including: a dividing module, configured to divide an original playback picture into at least two regions of interest; a first determining module, configured to determine a first region of interest, among the at least two regions of interest divided by the dividing module, in which a trigger event occurs; an acquiring module, configured to acquire decoded data of a first video picture displayed in the first region of interest determined by the first determining module; and a playing module, configured to render the decoded data of the first video picture acquired by the acquiring module to a specified play window for playing.

In a first possible implementation of the second aspect, the terminal further includes: a second determining module, configured to determine a correspondence between each of the at least two regions of interest and a specified play window; where the playing module is further configured to render, according to the correspondence determined by the second determining module, the decoded data of the first video picture acquired by the acquiring module to the specified play window corresponding to the first region of interest for playing.

With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the first determining module includes: a first determining unit, configured to determine a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation or a selection operation on the region of interest; and a second determining unit, configured to determine the region of interest on which the trigger operation determined by the first determining unit is performed as the first region of interest.

With reference to the second aspect or the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the first determining module includes: a first acquiring unit, configured to acquire coordinate metadata of the point where the trigger event occurs within the original playback picture; and a third determining unit, configured to determine, according to the coordinate metadata acquired by the first acquiring unit, the region of interest to which the trigger-event point belongs as the first region of interest.

With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the acquiring module includes: a second acquiring unit, configured to acquire decoded data of the original playback picture; and a third determining unit, configured to determine the decoded data of the first video picture according to the decoded data of the original playback picture acquired by the second acquiring unit.

With reference to the second aspect or any one of the first to fourth possible implementations of the second aspect, in a fifth possible implementation of the second aspect, the playing module is further configured to render the decoded data of the first video picture to the specified play window for enlarged playing, where the specified play window is larger than the first region of interest.

With reference to the second aspect or any one of the first to fifth possible implementations of the second aspect, in a sixth possible implementation of the second aspect, the playing module includes: a pop-up unit, configured to pop up an independent play window; and a playing unit, configured to render the decoded data of the first video picture to the independent play window popped up by the pop-up unit for playing.

In a third aspect, an embodiment of the present invention provides a system for playing video, the system including: the terminal according to the second aspect of the present invention; a video capture system, configured to capture video images and generate a media stream by encoding the video images; a server, configured to obtain the media stream generated by the video capture system and provide the media stream to the terminal; and a storage device, configured to store the media stream obtained by the server; where the terminal includes: a dividing module, configured to divide an original playback picture into at least two regions of interest; a first determining module, configured to determine a first region of interest, among the at least two regions of interest divided by the dividing module, in which a trigger event occurs; an acquiring module, configured to acquire decoded data of a first video picture displayed in the first region of interest determined by the first determining module; and a playing module, configured to render the decoded data of the first video picture acquired by the acquiring module to a specified play window for playing.

Based on the above technical solutions, the method, terminal and system for playing video of the embodiments of the present invention, by dividing the original playback picture into multiple regions of interest and separately displaying the picture of the region of interest in which a trigger event occurs, on the one hand enable the user to observe clearer picture details of the region of interest and, on the other hand, enable the user to track the picture details of multiple regions of interest simultaneously, thereby significantly improving the user experience.

Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments of the present invention are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.

FIG. 1 is a schematic architecture diagram of an exemplary application scenario of an embodiment of the present invention.

FIG. 2 is a schematic flowchart of a method for playing video according to an embodiment of the present invention.

FIG. 3 is another schematic flowchart of a method for playing video according to an embodiment of the present invention. FIG. 4 is a schematic flowchart of a method for dividing regions of interest according to an embodiment of the present invention. FIG. 5 is another schematic flowchart of a method for dividing regions of interest according to an embodiment of the present invention.

FIG. 6 is a schematic flowchart of a method for determining a region of interest in which a trigger event occurs according to an embodiment of the present invention.

FIG. 7 is another schematic flowchart of a method for determining a region of interest in which a trigger event occurs according to an embodiment of the present invention.

FIG. 8 is a schematic flowchart of a method for acquiring decoded data of a region of interest according to an embodiment of the present invention.

FIG. 9 is a schematic flowchart of a method for playing the picture of a region of interest according to an embodiment of the present invention.

FIG. 10 is a schematic flowchart of a method for playing video according to another embodiment of the present invention. FIG. 11A and FIG. 11B are other schematic flowcharts of a method for playing video according to another embodiment of the present invention.

FIG. 12A and FIG. 12B are schematic diagrams of playing a region of interest according to an embodiment of the present invention.

FIG. 13 is a schematic block diagram of a terminal according to an embodiment of the present invention.

FIG. 14 is another schematic block diagram of a terminal according to an embodiment of the present invention.

FIG. 15 is a schematic block diagram of a first determining module according to an embodiment of the present invention.

FIG. 16 is another schematic block diagram of a first determining module according to an embodiment of the present invention.

FIG. 17 is a schematic block diagram of an acquiring module according to an embodiment of the present invention.

FIG. 18 is a schematic block diagram of a playing module according to an embodiment of the present invention.

FIG. 19 is a schematic block diagram of a system according to an embodiment of the present invention.

FIG. 20 is a schematic block diagram of a terminal according to another embodiment of the present invention.

Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

FIG. 1 shows a schematic architecture diagram of an exemplary application scenario of an embodiment of the present invention. As shown in FIG. 1, a video surveillance system suitable for applying the embodiments of the present invention may include: a video capture device, a central server, a storage device, and a terminal with a client. The video capture device may capture video images and encode them into a media stream for network transmission; for example, it may include devices such as network cameras, analog cameras, encoders, and digital video recorders (DVRs). After connecting to the central server, the client on the terminal may request the video stream, decode and display it, and let the user view live video images.

The central server may include a management server and a media server. The media server may be responsible for receiving the media stream, recording the media-stream data onto the storage device, and forwarding the media stream to clients for on-demand playing; the management server may be responsible for user login, authentication, service scheduling and similar functions. The central server may also accept access from multiple clients and manage the network connections among video surveillance systems. The storage device may be, for example, a disk array responsible for storing the video data, using network attached storage (NAS), a storage area network (SAN), or the server itself for storage.

It should be understood that the video surveillance system shown in FIG. 1 is only one embodiment suitable for applying the method of the present invention and is not intended to limit the use or functions of the present invention in any way; nor should the present invention be interpreted as having any requirement relating to any one or combination of the components of the illustrated video surveillance system. For clarity, the embodiments of the present invention below use the application scenario of a video surveillance system as an example, but the present invention is not limited thereto.

It should also be understood that the video-data transmission in the embodiments of the present invention may use various communication networks or systems, for example: a Global System of Mobile communication (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, General Packet Radio Service (GPRS), a Long Term Evolution (LTE) system, an LTE Frequency Division Duplex (FDD) system, LTE Time Division Duplex (TDD), a Universal Mobile Telecommunication System (UMTS), or a Worldwide Interoperability for Microwave Access (WiMAX) communication system; the embodiments of the present invention are not limited thereto.
FIG. 2 shows a schematic flowchart of a method 100 for playing video according to an embodiment of the present invention. The method 100 may be performed by an apparatus that plays video, for example a terminal or a client. As shown in FIG. 2, the method 100 includes:

S110: divide an original playback picture into at least two regions of interest;

S120: determine a first region of interest, among the at least two regions of interest, in which a trigger event occurs; S130: acquire decoded data of a first video picture displayed in the first region of interest; S140: render the decoded data of the first video picture to a specified play window for playing.

In order not to affect the picture quality of the played video while improving the user's view of picture details, especially when multiple video pictures are shown shrunken in the same window, the apparatus that plays video may first divide the original playback picture into multiple regions of interest and acquire the decoded data of the video picture displayed in the region of interest in which a trigger event occurs; the apparatus can then render that decoded data to a separate specified play window for playing. The picture details that interest the user are thus displayed in an independent window, and the user's view of picture details is improved without affecting picture quality.

Therefore, the method for playing video of the embodiment of the present invention, by dividing the original playback picture into multiple regions of interest and separately displaying the picture of the region of interest in which a trigger event occurs, on the one hand enables the user to observe clearer picture details of the region of interest and, on the other hand, enables the user to track the picture details of multiple regions of interest simultaneously, thereby significantly improving the user experience.

It should be understood that in the embodiments of the present invention, a video may be either a video file or a real-time video stream; the embodiments of the present invention are described using the real-time video stream as an example, but the embodiments of the present invention are not limited thereto.

In an embodiment of the present invention, optionally, as shown in FIG. 3, the method 100 further includes:

S150: determine a correspondence between each of the at least two regions of interest and a specified play window;

where the rendering of the decoded data of the first video picture to a specified play window for playing includes:

S141: according to the correspondence, render the decoded data of the first video picture to the specified play window corresponding to the first region of interest for playing. That is, each region of interest may be associated with one or more play windows, used to play the picture of the region of interest when a trigger event occurs in it. The specified window may be the maximum play window of the display device or part of the maximum play window; it may be a currently existing play window or part of an existing play window, or a newly popped-up or newly created play window; the embodiment of the present invention is not limited thereto.
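The correspondence between regions of interest and specified play windows described in S150 and S141 can be kept as a simple lookup table. A minimal sketch, with hypothetical window identifiers; the patent does not prescribe any particular data structure:

```python
# Hypothetical window identifiers; the patent only requires that each region
# of interest be bound to one or more specified play windows.
roi_to_windows = {0: ["win_a"], 5: ["win_b"], 9: ["win_b", "win_c"]}

def windows_for_trigger(roi_index, mapping):
    """Return every play window bound to the triggered region of interest,
    or an empty list when the region has no bound window."""
    return mapping.get(roi_index, [])
```

On a trigger event in region 9, for example, the decoded picture of that region would be dispatched to both of its bound windows.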
The method for playing video according to the embodiments of the present invention is described in detail below with reference to FIG. 4 to FIG. 12B.

In S110, optionally, the dividing of the original playback picture into at least two regions of interest includes: dividing the original playback picture into the at least two regions of interest in an equal-division manner or a free-division manner.

Specifically, for a single play window, multiple regions of interest may be pre-divided on the client; the regions of interest may be of equal or unequal size, and a region of interest may be set as an irregular region. In addition, in the embodiments of the present invention, the correspondence between the different regions of interest and the play windows may also be determined. The division of the regions of interest may be performed manually by the user, or configured automatically by the client software, with the client saving the configuration.

The picture may be divided in an equal-division manner or a free-division manner; example configuration flows are shown in FIG. 4 and FIG. 5. For example, as shown in FIG. 4, the method of dividing regions of interest in an equal-division manner includes:

S111: click the right-key menu or a toolbar button to pop up a configuration window;

S112: in the pop-up configuration window, set the number of regions of interest, for example, 16 regions of interest;

S113: right-click a region of interest to set the play window bound to that region of interest;

S114: select a play window, used to play the video of the region of interest when a trigger event occurs in that region of interest.

As shown in FIG. 5, the method of dividing regions of interest in a free-division manner may include, for example:

S115: click the right-key menu or a toolbar button to pop up a configuration window;

S116: in the pop-up configuration window, drag the mouse to draw the regions of interest; the sizes and shapes of the regions of interest may be the same or different;

S117: right-click a region of interest to set the play window bound to that region of interest; S118: select a play window, used to play the video of the region of interest when a trigger event occurs in that region of interest.

It should be understood that in the embodiments of the present invention, the original playback picture in which regions of interest are divided may be the whole playback picture within the maximum play window of the display device, or one or more of the multiple pictures played simultaneously within that maximum window; the embodiment of the present invention is not limited thereto.
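The equal-division configuration can be sketched as computing the rectangle of each region of interest for a rows-by-cols grid. The function name and the (start_x, start_y, end_x, end_y) tuple layout are illustrative assumptions, not from the patent:

```python
def divide_equally(width, height, rows, cols):
    """Divide the original playback picture into rows*cols equal regions of
    interest, returning one (start_x, start_y, end_x, end_y) tuple each."""
    return [(c * width // cols, r * height // rows,
             (c + 1) * width // cols, (r + 1) * height // rows)
            for r in range(rows) for c in range(cols)]

grid = divide_equally(1920, 1080, 4, 4)  # 16 regions, as in FIG. 12A
```

Free division would instead store user-drawn rectangles (or irregular shapes) directly, but the bound-window bookkeeping stays the same.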
In S120, the apparatus that plays video determines the first region of interest, among the at least two regions of interest, in which a trigger event occurs, so as to display the picture within the first region of interest in a separate play window and thereby improve the display of picture details.

In the embodiments of the present invention, the event may be triggered manually by the user to determine the region of interest, or a trigger generated automatically by an event may be detected to determine the region of interest; the two cases are described below with reference to FIG. 6 and FIG. 7, respectively.

As shown in FIG. 6, optionally, the determining of the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes:

S121: determine a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation or a selection operation on the region of interest;

S122: determine the region of interest on which the trigger operation is performed as the first region of interest.

Specifically, when viewing the video, if the user notices that an event has occurred, the user may operate the client interface, for example performing a trigger operation on a region of interest in the original playback picture, so that the picture of the region of interest where the event occurs is played in the pre-specified play window or displayed in a popped-up independent play window; when events occur in multiple regions of interest, multiple windows may be triggered for display. The trigger operation is, for example, a single-click operation, a double-click operation or a selection operation on the region of interest; the embodiment of the present invention is not limited thereto.

FIG. 7 shows another schematic flowchart of a method for determining a region of interest in which a trigger event occurs according to an embodiment of the present invention. As shown in FIG. 7, optionally, the determining of the first region of interest, among the at least two regions of interest, in which a trigger event occurs includes:

S123: acquire coordinate metadata of the point where the trigger event occurs within the original playback picture; S124: according to the coordinate metadata, determine the region of interest to which the trigger-event point belongs as the first region of interest.

Specifically, for example, the user may pre-configure the areas in which events are to be detected automatically, and configure event-detection rules such as motion detection or intelligent-analysis detection. When an event occurs, the client software can determine the corresponding pre-configured region of interest from the coordinate metadata of the point where the trigger event occurs, so that the corresponding picture can be played in the pre-specified play window or displayed in a popped-up independent play window; when events occur in multiple regions of interest, multiple windows may be triggered for display.
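Determining the region of interest from the coordinate metadata of the event point amounts to a point-in-rectangle lookup. A sketch under the assumption that each region is an axis-aligned rectangle; irregular regions would need a different containment test:

```python
def regions_for_point(x, y, regions):
    """Return the indices of every region of interest containing the
    trigger-event point; rectangles are treated as half-open so a point on a
    shared edge belongs to exactly one region."""
    return [i for i, (sx, sy, ex, ey) in enumerate(regions)
            if sx <= x < ex and sy <= y < ey]

regions = [(0, 0, 100, 100), (100, 0, 200, 100)]  # two side-by-side regions
```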
It should be understood that for an irregular intelligent-analysis area, a trigger event may cover multiple regions of interest; in this case, the multiple regions of interest may all be determined as first regions of interest in which a trigger event occurs; the embodiment of the present invention is not limited thereto.

In the embodiments of the present invention, the apparatus that plays video may determine whether a trigger event occurs in a region of interest by means such as motion detection or intelligent-analysis detection; the central server may also perform detection to determine whether a trigger event occurs in a region of interest and, on detecting a trigger event, may feed the coordinate metadata of the trigger-event point back to the apparatus that plays video, so that the apparatus can determine the first region of interest in which a trigger event occurs according to the coordinate metadata; the embodiment of the present invention is not limited thereto.
In S130, the apparatus that plays video acquires the decoded data of the first video picture displayed in the first region of interest, so that the first video picture can be played in the specified play window.

In an embodiment of the present invention, optionally, as shown in FIG. 8, the acquiring of the decoded data of the first video picture displayed in the first region of interest includes:

S131: acquire decoded data of the original playback picture;

S132: determine the decoded data of the first video picture according to the decoded data of the original playback picture.

Specifically, for example, after the apparatus that plays video receives an event manually triggered by the user, including a mouse single click, a double click, a click of a toolbar button or a shortcut-key trigger, or determines the first region of interest in which a trigger event occurs according to coordinate metadata, the apparatus may extract the data content belonging to that region of interest from the decoded YUV data of the original play window, and may play that content in the pre-specified play pane (or pop up an independent play window to play it) according to the pre-configured correspondence. Because all the play windows use the same YUV data source, the apparatus does not need to pull in or add extra video streams.

For example, suppose the resolution of the original playback picture is Width × Height, the start point of the region of interest has abscissa StartX and ordinate StartY, and the end point has abscissa EndX and ordinate EndY; the YUV data of the original playback picture is in the array Org[Width × Height], and the YUV data of the region of interest is in Dst[ROIWidth × ROIHeight], where n is any point in the region of interest. The YUV data of the region of interest can be determined according to the following equations:

ROIWidth = EndX - StartX;

ROIHeight = EndY - StartY;

Dst[n] = Org[Width × (StartY + n / ROIWidth) + StartX + n % ROIWidth];

where the division operation "/" rounds down and "%" is the remainder operation.

In S140, the apparatus that plays video renders the decoded data of the first video picture to the specified play window for playing.

Specifically, the apparatus that plays video may play the first video picture in a popped-up window, display it in a new play window, or display it in the original play window, and may digitally scale the first video picture to fit the size of the play window. That is, in the embodiments of the present invention, the specified window may be the maximum play window of the display device or part of the maximum play window; it may be a currently existing play window or part of an existing play window, or a newly popped-up or newly created play window; the specified window may be one window or more than one window; the embodiment of the present invention is not limited thereto.
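The digital scaling mentioned above can be illustrated with nearest-neighbour resampling of a flat sample array; this is one simple choice of scaling filter, not one the patent mandates:

```python
def scale_nearest(src, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour digital zoom of a flat src_w*src_h sample array to
    dst_w*dst_h, so the region-of-interest picture fits its play window."""
    return [src[(y * src_h // dst_h) * src_w + (x * src_w // dst_w)]
            for y in range(dst_h) for x in range(dst_w)]

small = [1, 2,
         3, 4]                               # a 2x2 picture
big = scale_nearest(small, 2, 2, 4, 4)       # enlarged 2x for the play window
```

Real renderers would normally use a smoother filter (bilinear or better) on the GPU, but the index arithmetic is the same.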
In an embodiment of the present invention, optionally, the rendering of the decoded data of the first video picture to a specified play window for playing includes:

rendering the decoded data of the first video picture to the specified play window for enlarged playing, where the specified play window is larger than the first region of interest.

In an embodiment of the present invention, for example, as shown in FIG. 9, the rendering of the decoded data of the first video picture to a specified play window for playing includes:

S142: pop up an independent play window;

S143: render the decoded data of the first video picture to the independent play window for playing. It should be understood that in this embodiment of the present invention, the specified window is the popped-up independent play window, which may be larger than the first region of interest so as to play the first video picture enlarged, but the embodiment of the present invention is not limited thereto; for example, the independent play window may also be smaller than or equal to the first region of interest.

It should be understood that in the embodiments of the present invention, "B corresponding to A" means that B is associated with A and can be determined from A; but it should also be understood that determining B from A does not mean that B is determined only from A; B may also be determined from A and/or other information.

It should be understood that in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.

Therefore, the method for playing video of the embodiment of the present invention, by dividing the original playback picture into multiple regions of interest and separately displaying the picture of the region of interest in which a trigger event occurs, on the one hand enables the user to observe the picture details of the region of interest more clearly and more intuitively and, on the other hand, enables the user to track the picture details of multiple regions of interest simultaneously, thereby significantly improving the user experience; in addition, the embodiment of the present invention plays the picture of the region of interest using the original decoded data, without adding extra video streams.
The method for playing video according to the embodiments of the present invention is described in detail below with reference to FIG. 10 to FIG. 12B.

As shown in FIG. 10, the method 200 for playing video may be performed by an apparatus that plays video, for example a terminal or a client. The method 200 may include:

S201: display the graphical user interface (GUI) of the client;

S202: determine whether the division mode of the playback picture is to be set; if yes, proceed to S203, otherwise proceed to S204;

S203: set the division mode of the playback picture, and proceed to S204;

S204: determine whether the user starts playing; if yes, proceed to S205, otherwise proceed to S201;

S205: open the network port;

S206: receive the media stream and decode it for rendering to the display device;

S207: determine whether the user manually triggers an event; if yes, proceed to S208, otherwise proceed to S209;

S208: when it is determined that the user manually triggers an event, display the area where the event occurs enlarged in the specified window, and proceed to S206;

S209: determine whether the apparatus automatically triggers an event; if yes, proceed to S210, otherwise proceed to S211;

S210: when it is determined that the apparatus automatically triggers an event, display the area where the event occurs enlarged in the specified window, and proceed to S206;

S211: determine whether the user ends playing; if yes, proceed to S212, otherwise proceed to S206;

S212: determine whether the user closes the client; if yes, proceed to S213, otherwise proceed to S201;

S213: clean up system resources; video playing ends.
It should be understood that in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
FIG. 11A shows a schematic flowchart of a method 300 for playing a manually triggered region of interest according to an embodiment of the present invention. The method 300 may be executed by a video playback apparatus, for example, a terminal or a client. As shown in FIG. 11A, the method 300 may include:

S301: for each frame of the video picture, performing normal rendering and playing in the original playback window;

S302: determining whether the user manually triggers an event; if yes, the flow proceeds to S303; otherwise, the flow returns to S301;

S303: acquiring the region of interest in which the user event is located;

S304: checking the playback window bound to the region of interest;

S305: for each frame of the video picture, calculating the YUV data of the video picture covered by the region of interest;

S306: for each frame of the video picture, rendering the YUV data of the region of interest to the designated playback window for playing; for example, as shown in FIG. 12A, the entire playback window includes one original playback picture window and three designated playback windows of the same size as the original playback picture window, where the original playback picture is divided into 16 regions of interest, and the picture of the region of interest with the manually triggered event is played in an enlarged manner in one of the designated playback windows;

S307: determining whether the user ends playback; if yes, the flow proceeds to S308; otherwise, the flow returns to S305;

S308: stopping video playback; the flow ends.
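Step S303, locating the region of interest containing the user's click, reduces to simple grid arithmetic when the picture is equally divided. A minimal sketch, assuming row-major numbering and the 4x4 (16-region) grid of the FIG. 12A example (the function name and defaults are illustrative):

```python
def roi_at(x, y, frame_w, frame_h, rows=4, cols=4):
    """Map a click at pixel (x, y) to the index of the equally divided
    region of interest containing it, numbered row-major from 0."""
    col = min(x * cols // frame_w, cols - 1)
    row = min(y * rows // frame_h, rows - 1)
    return row * cols + col
```

For a 1920x1080 picture, a click at the exact center (960, 540) lands in region 10 of the 4x4 grid.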
FIG. 11B shows a schematic flowchart of a method 400 for playing a region of interest automatically triggered by an event according to an embodiment of the present invention. The method 400 may include:

S401: for each frame of the video picture, performing normal rendering and playing in the original playback window;

S402: performing intelligent analysis to determine whether there is a trigger event; if yes, the flow proceeds to S403; otherwise, the flow returns to S401;

S403: calculating the correspondence between the intelligent analysis region and the regions of interest;

S404: acquiring the region or regions of interest covered by the analysis event (there may be more than one);

S405: checking the playback window bound to the region of interest;

S406: for each frame of the video picture, calculating the YUV data of the video picture covered by the region of interest;

S407: for each frame of the video picture, rendering the YUV data of the region of interest to the designated playback window for playing; for example, as shown in FIG. 12B, the entire playback window includes one original playback picture window and three designated playback windows of the same size as the original playback picture window, where the original playback picture is divided into 16 regions of interest, and the picture of the region of interest with the trigger event is played in an enlarged manner in one of the designated playback windows;

S408: determining whether the user ends playback; if yes, the flow proceeds to S409; otherwise, the flow returns to S406;

S409: stopping video playback; the flow ends.
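Steps S403 and S404 map an analysis event, typically reported as a bounding rectangle, to the set of equally divided regions of interest it overlaps, which may be several. A hedged sketch of that mapping (rectangle convention, grid defaults, and function name are assumptions):

```python
def rois_covered(ex0, ey0, ex1, ey1, frame_w, frame_h, rows=4, cols=4):
    """Return row-major indices of every equally divided region of interest
    overlapped by the analysis event rectangle (ex0, ey0)-(ex1, ey1)."""
    c0 = max(ex0 * cols // frame_w, 0)
    c1 = min(ex1 * cols // frame_w, cols - 1)
    r0 = max(ey0 * rows // frame_h, 0)
    r1 = min(ey1 * rows // frame_h, rows - 1)
    return [r * cols + c for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

In a 1600x1600 frame with 400-pixel cells, an event box from (100, 100) to (500, 500) straddles four regions: 0, 1, 4, and 5, so four designated windows (or one window per bound region) would be driven.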
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present invention.

Therefore, with the video playback method in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.
The foregoing has described in detail the video playback method according to the embodiments of the present invention with reference to FIG. 1 to FIG. 12B. The following describes in detail a video playback terminal and system according to the embodiments of the present invention with reference to FIG. 13 to FIG. 20.
FIG. 13 shows a schematic block diagram of a terminal 500 according to an embodiment of the present invention. As shown in FIG. 13, the terminal 500 includes:

a dividing module 510, configured to divide an original playback picture into at least two regions of interest; a first determining module 520, configured to determine, among the at least two regions of interest divided by the dividing module 510, a first region of interest in which a trigger event occurs;

an acquiring module 530, configured to acquire decoded data of a first video picture displayed in the first region of interest determined by the first determining module 520; and

a playing module 540, configured to render the decoded data of the first video picture acquired by the acquiring module 530 to a designated playback window for playing.

Therefore, with the video playback terminal in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.

It should be understood that, in this embodiment of the present invention, the video playback terminal can play both video files and real-time video streams. The embodiments of the present invention are described by using only the playing of a real-time video stream by the terminal as an example, but the embodiments of the present invention are not limited thereto.
In this embodiment of the present invention, optionally, as shown in FIG. 14, the terminal 500 further includes: a second determining module 550, configured to determine a correspondence between each of the at least two regions of interest and a designated playback window;

where the playing module 540 is further configured to: render, according to the correspondence determined by the second determining module 550, the decoded data of the first video picture acquired by the acquiring module 530 to the designated playback window corresponding to the first region of interest for playing.
In this embodiment of the present invention, optionally, the dividing module 510 is further configured to divide the original playback picture into the at least two regions of interest by means of equal division or free division.
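For the equal-division case, the dividing step amounts to computing one rectangle per grid cell. A small sketch under the assumption of row-major ordering, with coordinate names chosen to match the (StartX, StartY, EndX, EndY) variables used in the patent's copy formula:

```python
def equal_rois(frame_w, frame_h, rows, cols):
    """Divide a frame into rows x cols equal regions of interest, returning
    (StartX, StartY, EndX, EndY) per region in row-major order."""
    return [(c * frame_w // cols, r * frame_h // rows,
             (c + 1) * frame_w // cols, (r + 1) * frame_h // rows)
            for r in range(rows)
            for c in range(cols)]
```

Free division would instead store user-drawn rectangles directly; either way the downstream steps only consume the per-region rectangle.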
In this embodiment of the present invention, optionally, as shown in FIG. 15, the first determining module 520 includes: a first determining unit 521, configured to determine a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation, or a selection operation on a region of interest; and a second determining unit 522, configured to determine, as the first region of interest, the region of interest with the trigger operation determined by the first determining unit 521.

In this embodiment of the present invention, optionally, as shown in FIG. 16, the first determining module 520 includes: a first acquiring unit 523, configured to acquire coordinate metadata of a trigger event occurrence point within the original playback picture; and

a third determining unit 524, configured to determine, according to the coordinate metadata acquired by the first acquiring unit 523, the region of interest to which the trigger event occurrence point belongs as the first region of interest.

In this embodiment of the present invention, optionally, as shown in FIG. 17, the acquiring module 530 includes: a second acquiring unit 531, configured to acquire decoded data of the original playback picture; and

a third determining unit 532, configured to determine the decoded data of the first video picture according to the decoded data of the original playback picture acquired by the second acquiring unit 531.

In this embodiment of the present invention, optionally, the playing module is further configured to: render the decoded data of the first video picture to the designated playback window for enlarged playing, where the designated playback window is larger than the first region of interest.

In this embodiment of the present invention, optionally, as shown in FIG. 18, the playing module 540 includes: a pop-up unit 541, configured to pop up an independent playback window; and

a playing unit 542, configured to render the decoded data of the first video picture to the independent playback window popped up by the pop-up unit for playing. It should be understood that the video playback terminal 500 according to this embodiment of the present invention may correspond to the video playback apparatus in the embodiments of the present invention, and that the foregoing and other operations and/or functions of the modules in the terminal 500 are respectively intended to implement the corresponding procedures of the methods 100 to 400 in FIG. 1 to FIG. 12B. For brevity, details are not described herein again.

Therefore, with the video playback terminal in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.
FIG. 19 shows a schematic block diagram of a system 600 according to an embodiment of the present invention. As shown in FIG. 19, the system 600 includes:

the terminal 610 according to the embodiments of the present invention;

a video capture system 620, configured to capture video images and generate a media stream by encoding the video images;

a server 630, configured to acquire the media stream generated by the video capture system, and provide the media stream to the terminal 610; and

a storage device 640, configured to store the media stream acquired by the server 630.

It should be understood that the terminal 610 included in the video playback system 600 according to this embodiment of the present invention may correspond to the video playback terminal 500 in the embodiments of the present invention, and that the foregoing and other operations and/or functions of the modules in the terminal 610 are respectively intended to implement the corresponding procedures of the methods 100 to 400 in FIG. 1 to FIG. 12B. For brevity, details are not described herein again.

Therefore, with the video playback system in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.
An embodiment of the present invention further provides a video playback terminal. As shown in FIG. 20, the terminal 700 includes a processor 710, a memory 720, and a bus system 730, where the processor 710 and the memory 720 are connected through the bus system 730; the memory 720 is configured to store instructions, and the processor 710 is configured to execute the instructions stored in the memory 720. The processor 710 is configured to: divide an original playback picture into at least two regions of interest; determine, among the at least two regions of interest, a first region of interest in which a trigger event occurs; acquire decoded data of a first video picture displayed in the first region of interest; and render the decoded data of the first video picture to a designated playback window for playing. Therefore, with the video playback terminal in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.
It should be understood that, in this embodiment of the present invention, the processor 710 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The memory 720 may include a read-only memory and a random access memory, and provides instructions and data to the processor 710. A part of the memory 720 may further include a non-volatile random access memory. For example, the memory 720 may further store device-type information.

In addition to a data bus, the bus system 730 may further include a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, the various buses are all marked as the bus system 730 in the figure.

In an implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 710 or by instructions in the form of software. The steps of the method disclosed with reference to the embodiments of the present invention may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 720; the processor 710 reads the information in the memory 720 and completes the steps of the foregoing method in combination with its hardware. To avoid repetition, details are not described here again.
Optionally, as an embodiment, the processor 710 is further configured to: determine a correspondence between each of the at least two regions of interest and a designated playback window; and the rendering, by the processor 710, of the decoded data of the first video picture to a designated playback window for playing includes: rendering, according to the correspondence, the decoded data of the first video picture to the designated playback window corresponding to the first region of interest for playing.

Optionally, as an embodiment, the dividing, by the processor 710, of the original playback picture into at least two regions of interest includes: dividing the original playback picture into the at least two regions of interest by means of equal division or free division.

Optionally, as an embodiment, the determining, by the processor 710, of the first region of interest with a trigger event among the at least two regions of interest includes: determining a trigger operation performed by the user on a region of interest in the original playback picture, the trigger operation including a single-click operation, a double-click operation, or a selection operation on a region of interest; and determining the region of interest with the trigger operation as the first region of interest.

Optionally, as an embodiment, the determining, by the processor 710, of the first region of interest with a trigger event among the at least two regions of interest includes: acquiring coordinate metadata of a trigger event occurrence point within the original playback picture; and determining, according to the coordinate metadata, the region of interest to which the trigger event occurrence point belongs as the first region of interest.

Optionally, as an embodiment, the acquiring, by the processor 710, of the decoded data of the first video picture displayed in the first region of interest includes: acquiring decoded data of the original playback picture; and determining the decoded data of the first video picture according to the decoded data of the original playback picture.

Optionally, as an embodiment, the terminal 700 is further configured to: render the decoded data of the first video picture to the designated playback window for enlarged playing, where the designated playback window is larger than the first region of interest.

Optionally, as an embodiment, the terminal 700 further includes a display 740, where the rendering, by the processor 710, of the decoded data of the first video picture to a designated playback window for playing includes: popping up an independent playback window; and the display 740 is configured to render the decoded data of the first video picture to the independent playback window for playing.

It should be understood that the video playback terminal 700 according to this embodiment of the present invention may correspond to the video playback terminal 500 or the terminal 610 in the embodiments of the present invention, and that the foregoing and other operations and/or functions of the modules in the terminal 700 are respectively intended to implement the corresponding procedures of the methods 100 to 400 in FIG. 1 to FIG. 12B. For brevity, details are not described herein again.

Therefore, with the video playback terminal in this embodiment of the present invention, the original playback picture is divided into multiple regions of interest, and the picture of the region of interest with a trigger event is displayed separately. On the one hand, this enables the user to observe clearer picture details of the region of interest; on the other hand, it enables the user to track the picture details of multiple regions of interest at the same time, thereby significantly improving user experience.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described generally above according to function. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation shall not be regarded as going beyond the scope of the present invention.

A person skilled in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.

In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. The components displayed as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

1. A video playback method, comprising:

dividing an original playback picture into at least two regions of interest;

determining, among the at least two regions of interest, a first region of interest in which a trigger event occurs; acquiring decoded data of a first video picture displayed in the first region of interest; and rendering the decoded data of the first video picture to a designated playback window for playing.

2. The method according to claim 1, further comprising: determining a correspondence between each of the at least two regions of interest and a designated playback window;

wherein the rendering the decoded data of the first video picture to a designated playback window for playing comprises:

rendering, according to the correspondence, the decoded data of the first video picture to the designated playback window corresponding to the first region of interest for playing.

3. The method according to claim 1 or 2, wherein the determining, among the at least two regions of interest, a first region of interest in which a trigger event occurs comprises:

determining a trigger operation performed by a user on a region of interest in the original playback picture, the trigger operation comprising: a single-click operation, a double-click operation, or a selection operation on a region of interest; and

determining the region of interest with the trigger operation as the first region of interest.

4. The method according to claim 1 or 2, wherein the determining, among the at least two regions of interest, a first region of interest in which a trigger event occurs comprises:

acquiring coordinate metadata of a trigger event occurrence point within the original playback picture; and

determining, according to the coordinate metadata, the region of interest to which the trigger event occurrence point belongs as the first region of interest.

5. The method according to any one of claims 1 to 4, wherein the acquiring decoded data of a first video picture displayed in the first region of interest comprises:

acquiring decoded data of the original playback picture; and

determining the decoded data of the first video picture according to the decoded data of the original playback picture.

6. The method according to any one of claims 1 to 5, wherein the rendering the decoded data of the first video picture to a designated playback window for playing comprises:

rendering the decoded data of the first video picture to the designated playback window for enlarged playing, wherein the designated playback window is larger than the first region of interest.

7. The method according to any one of claims 1 to 6, wherein the rendering the decoded data of the first video picture to a designated playback window for playing comprises:

popping up an independent playback window; and

rendering the decoded data of the first video picture to the independent playback window for playing.
8. A video playback terminal, comprising:

a dividing module, configured to divide an original playback picture into at least two regions of interest;

a first determining module, configured to determine, among the at least two regions of interest divided by the dividing module, a first region of interest in which a trigger event occurs;

an acquiring module, configured to acquire decoded data of a first video picture displayed in the first region of interest determined by the first determining module; and

a playing module, configured to render the decoded data of the first video picture acquired by the acquiring module to a designated playback window for playing.

9. The terminal according to claim 8, further comprising: a second determining module, configured to determine a correspondence between each of the at least two regions of interest and a designated playback window;

wherein the playing module is further configured to: render, according to the correspondence determined by the second determining module, the decoded data of the first video picture acquired by the acquiring module to the designated playback window corresponding to the first region of interest for playing.

10. The terminal according to claim 8 or 9, wherein the first determining module comprises:

a first determining unit, configured to determine a trigger operation performed by a user on a region of interest in the original playback picture, the trigger operation comprising: a single-click operation, a double-click operation, or a selection operation on a region of interest; and a second determining unit, configured to determine, as the first region of interest, the region of interest with the trigger operation determined by the first determining unit.

11. The terminal according to claim 8 or 9, wherein the first determining module comprises:

a first acquiring unit, configured to acquire coordinate metadata of a trigger event occurrence point within the original playback picture; and

a third determining unit, configured to determine, according to the coordinate metadata acquired by the first acquiring unit, the region of interest to which the trigger event occurrence point belongs as the first region of interest.

12. The terminal according to any one of claims 8 to 11, wherein the acquiring module comprises:

a second acquiring unit, configured to acquire decoded data of the original playback picture; and

a third determining unit, configured to determine the decoded data of the first video picture according to the decoded data of the original playback picture acquired by the second acquiring unit.

13. The terminal according to any one of claims 8 to 12, wherein the playing module is further configured to: render the decoded data of the first video picture to the designated playback window for enlarged playing, wherein the designated playback window is larger than the first region of interest.

14. The terminal according to any one of claims 8 to 13, wherein the playing module comprises:

a pop-up unit, configured to pop up an independent playback window; and

a playing unit, configured to render the decoded data of the first video picture to the independent playback window popped up by the pop-up unit for playing.
15. A video playback system, comprising:

the terminal according to any one of claims 8 to 14;

a video capture system, configured to capture video images and generate a media stream by encoding the video images;

a server, configured to acquire the media stream generated by the video capture system, and provide the media stream to the terminal; and

a storage device, configured to store the media stream acquired by the server.
PCT/CN2012/087391 2012-12-25 2012-12-25 播放视频的方法、终端和系统 WO2014100966A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CN201280005842.5A CN104081760B (zh) 2012-12-25 2012-12-25 播放视频的方法、终端和系统
JP2015549913A JP2016506167A (ja) 2012-12-25 2012-12-25 ビデオ再生方法、端末、およびシステム
PCT/CN2012/087391 WO2014100966A1 (zh) 2012-12-25 2012-12-25 播放视频的方法、终端和系统
CN201711339410.9A CN108401134A (zh) 2012-12-25 2012-12-25 播放视频的方法、终端和系统
KR1020157017260A KR101718373B1 (ko) 2012-12-25 2012-12-25 비디오 재생 방법, 단말 및 시스템
EP12878638.1A EP2768216A4 (en) 2012-12-25 2012-12-25 VIDEO GAME PROCEDURE, END USER AND SYSTEM
US14/108,180 US9064393B2 (en) 2012-12-25 2013-12-16 Video playback method, terminal, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/087391 WO2014100966A1 (zh) 2012-12-25 2012-12-25 播放视频的方法、终端和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/108,180 Continuation US9064393B2 (en) 2012-12-25 2013-12-16 Video playback method, terminal, and system

Publications (1)

Publication Number Publication Date
WO2014100966A1 true WO2014100966A1 (zh) 2014-07-03

Family

ID=50974785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/087391 WO2014100966A1 (zh) 2012-12-25 2012-12-25 播放视频的方法、终端和系统

Country Status (6)

Country Link
US (1) US9064393B2 (zh)
EP (1) EP2768216A4 (zh)
JP (1) JP2016506167A (zh)
KR (1) KR101718373B1 (zh)
CN (2) CN104081760B (zh)
WO (1) WO2014100966A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113286196A (zh) * 2021-05-14 2021-08-20 湖北亿咖通科技有限公司 一种车载视频播放系统及视频分屏显示方法及装置

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9501915B1 (en) 2014-07-07 2016-11-22 Google Inc. Systems and methods for analyzing a video stream
US9779307B2 (en) 2014-07-07 2017-10-03 Google Inc. Method and system for non-causal zone search in video monitoring
US9449229B1 (en) 2014-07-07 2016-09-20 Google Inc. Systems and methods for categorizing motion event candidates
US9158974B1 (en) 2014-07-07 2015-10-13 Google Inc. Method and system for motion vector-based video monitoring and event categorization
US10127783B2 (en) 2014-07-07 2018-11-13 Google Llc Method and device for processing motion events
US10140827B2 (en) 2014-07-07 2018-11-27 Google Llc Method and system for processing motion event notifications
USD782495S1 (en) 2014-10-07 2017-03-28 Google Inc. Display screen or portion thereof with graphical user interface
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
CN105872816A (zh) * 2015-12-18 2016-08-17 乐视网信息技术(北京)股份有限公司 一种放大视频图像的方法及装置
EP3475785A4 (en) 2016-04-22 2020-05-13 SZ DJI Technology Co., Ltd. SYSTEMS AND METHODS FOR PROCESSING IMAGE DATA BASED ON A USER'S INTEREST
US10506237B1 (en) 2016-05-27 2019-12-10 Google Llc Methods and devices for dynamic adaptation of encoding bitrate for video streaming
US10957171B2 (en) 2016-07-11 2021-03-23 Google Llc Methods and systems for providing event alerts
US10192415B2 (en) 2016-07-11 2019-01-29 Google Llc Methods and systems for providing intelligent alerts for events
US10380429B2 (en) 2016-07-11 2019-08-13 Google Llc Methods and systems for person detection in a video feed
CN106802759A (zh) 2016-12-21 2017-06-06 华为技术有限公司 视频播放的方法及终端设备
US11783010B2 (en) 2017-05-30 2023-10-10 Google Llc Systems and methods of person recognition in video streams
US10599950B2 (en) 2017-05-30 2020-03-24 Google Llc Systems and methods for person recognition data management
CN107580228B (zh) * 2017-09-15 2020-12-22 威海元程信息科技有限公司 一种监控视频处理方法、装置及设备
US11134227B2 (en) 2017-09-20 2021-09-28 Google Llc Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment
US10664688B2 (en) 2017-09-20 2020-05-26 Google Llc Systems and methods of detecting and responding to a visitor to a smart home environment
TWI657697B (zh) 2017-12-08 2019-04-21 財團法人工業技術研究院 搜尋視訊事件之方法、裝置、及電腦可讀取記錄媒體
US11012750B2 (en) * 2018-11-14 2021-05-18 Rohde & Schwarz Gmbh & Co. Kg Method for configuring a multiviewer as well as multiviewer
CN112866625A (zh) * 2019-11-12 2021-05-28 杭州海康威视数字技术股份有限公司 一种监控视频的显示方法、装置、电子设备及存储介质
US11893795B2 (en) 2019-12-09 2024-02-06 Google Llc Interacting with visitors of a connected home environment
CN111258484A (zh) * 2020-02-12 2020-06-09 北京奇艺世纪科技有限公司 一种视频播放方法、装置、电子设备及存储介质
CN111491203B (zh) * 2020-03-16 2023-01-24 浙江大华技术股份有限公司 视频回放方法、装置、设备和计算机可读存储介质
CN111752655A (zh) * 2020-05-13 2020-10-09 西安万像电子科技有限公司 数据处理系统及方法
CN111757162A (zh) * 2020-06-19 2020-10-09 广州博冠智能科技有限公司 一种高清视频播放方法、装置、设备及存储介质
CN112235626B (zh) * 2020-10-15 2023-06-13 Oppo广东移动通信有限公司 视频渲染方法、装置、电子设备及存储介质
CN117459682A (zh) * 2020-11-09 2024-01-26 西安万像电子科技有限公司 图像传输方法、装置和系统
US11800179B2 (en) 2020-12-03 2023-10-24 Alcacruz Inc. Multiview video with one window based on another
CN112911384A (zh) * 2021-01-20 2021-06-04 三星电子(中国)研发中心 视频播放方法和视频播放装置
KR102500923B1 (ko) * 2021-06-03 2023-02-17 주식회사 지미션 스트림 영상 재생 장치 및 스트림 영상 재생 시스템
CN114339371A (zh) * 2021-12-30 2022-04-12 咪咕音乐有限公司 视频显示方法、装置、设备及存储介质
CN114666668B (zh) * 2022-03-18 2024-03-15 上海艺赛旗软件股份有限公司 一种视频回放方法、系统、设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1901642A (zh) * 2005-07-20 2007-01-24 英业达股份有限公司 视频浏览系统及方法
CN101365117A (zh) * 2008-09-18 2009-02-11 中兴通讯股份有限公司 一种自定义分屏模式的方法
CN101540858A (zh) * 2008-03-18 2009-09-23 索尼株式会社 图像处理装置、方法以及记录介质

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5724475A (en) * 1995-05-18 1998-03-03 Kirsten; Jeff P. Compressed digital video reload and playback system
JPH0918849A (ja) * 1995-07-04 1997-01-17 Matsushita Electric Ind Co Ltd 撮影装置
US20110058036A1 (en) * 2000-11-17 2011-03-10 E-Watch, Inc. Bandwidth management and control
JP2004120341A (ja) * 2002-09-26 2004-04-15 Riosu Corp:Kk 映像監視システム
JP2006033380A (ja) * 2004-07-15 2006-02-02 Hitachi Kokusai Electric Inc 監視システム
JP2005094799A (ja) * 2004-11-15 2005-04-07 Chuo Electronics Co Ltd 映像集約表示装置
US8228372B2 (en) * 2006-01-06 2012-07-24 Agile Sports Technologies, Inc. Digital video editing system
JP4714039B2 (ja) * 2006-02-27 2011-06-29 株式会社東芝 映像再生装置及び映像再生方法
WO2008103929A2 (en) * 2007-02-23 2008-08-28 Johnson Controls Technology Company Video processing systems and methods
JP2009100259A (ja) * 2007-10-17 2009-05-07 Mitsubishi Electric Corp 監視カメラおよび画像監視システム
JP4921338B2 (ja) * 2007-12-14 2012-04-25 株式会社日立製作所 プラント監視制御システム
KR101009881B1 (ko) * 2008-07-30 2011-01-19 삼성전자주식회사 재생되는 영상의 타겟 영역을 확대 디스플레이하기 위한장치 및 방법
KR100883632B1 (ko) * 2008-08-13 2009-02-12 주식회사 일리시스 고해상도 카메라를 이용한 지능형 영상 감시 시스템 및 그 방법
JP4715909B2 (ja) * 2008-12-04 2011-07-06 ソニー株式会社 画像処理装置及び方法、画像処理システム、並びに、画像処理プログラム
CN101616281A (zh) * 2009-06-26 2009-12-30 中兴通讯股份有限公司南京分公司 一种将手机电视播放画面局部放大的方法及移动终端
KR20110023634A (ko) * 2009-08-31 2011-03-08 (주)아이디스 썸네일 이미지 생성 장치 및 이를 이용한 썸네일 이미지 출력 방법
US20110316697A1 (en) * 2010-06-29 2011-12-29 General Electric Company System and method for monitoring an entity within an area
CN101951493A (zh) * 2010-09-25 2011-01-19 中兴通讯股份有限公司 移动终端及其视频通话中对远端图像局部放大方法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1901642A (zh) * 2005-07-20 2007-01-24 英业达股份有限公司 视频浏览系统及方法
CN101540858A (zh) * 2008-03-18 2009-09-23 索尼株式会社 图像处理装置、方法以及记录介质
CN101365117A (zh) * 2008-09-18 2009-02-11 中兴通讯股份有限公司 一种自定义分屏模式的方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113286196A (zh) * 2021-05-14 2021-08-20 湖北亿咖通科技有限公司 一种车载视频播放系统及视频分屏显示方法及装置
CN113286196B (zh) * 2021-05-14 2023-02-17 亿咖通(湖北)技术有限公司 一种车载视频播放系统及视频分屏显示方法及装置

Also Published As

Publication number Publication date
EP2768216A4 (en) 2015-10-28
CN108401134A (zh) 2018-08-14
CN104081760A (zh) 2014-10-01
JP2016506167A (ja) 2016-02-25
KR101718373B1 (ko) 2017-03-21
EP2768216A1 (en) 2014-08-20
US20140178033A1 (en) 2014-06-26
KR20150090223A (ko) 2015-08-05
CN104081760B (zh) 2018-01-09
US9064393B2 (en) 2015-06-23

Similar Documents

Publication Publication Date Title
WO2014100966A1 (zh) 播放视频的方法、终端和系统
WO2017107441A1 (zh) 截取视频动画的方法及装置
JP6438598B2 (ja) ビデオ画像の上に情報を表示するための方法及びデバイス
EP3526964B1 (en) Masking in video stream
US7839434B2 (en) Video communication systems and methods
US9275604B2 (en) Constant speed display method of mobile device
US9196306B2 (en) Smart scaling and cropping
US10430456B2 (en) Automatic grouping based handling of similar photos
WO2021031850A1 (zh) 图像处理的方法、装置、电子设备及存储介质
WO2020062684A1 (zh) 视频处理方法、装置、终端和介质
WO2022121731A1 (zh) 图像拍摄方法、装置、电子设备和可读存储介质
US10692532B2 (en) Systems and methods for video synopses
JP7126539B2 (ja) ビデオ再生方法、端末、およびシステム
JP2019110545A (ja) ビデオ再生方法、端末、およびシステム
JP2004179881A5 (zh)
JP2023536365A (ja) ビデオ処理方法及び装置
CN111835955B (zh) 一种数据获取方法及装置
CN108882004B (zh) 视频录制方法、装置、设备及存储介质
US20210051276A1 (en) Method and apparatus for providing video in portable terminal
CN112188269B (zh) 视频播放方法和装置以及视频生成方法和装置
TWI564822B (zh) 可預先篩選之視訊檔案回放系統及其方法與電腦程式產品
CN112948627B (zh) 一种报警视频生成方法、显示方法和装置
CN113596582A (zh) 一种视频预览方法、装置及电子设备
US9413960B2 (en) Method and apparatus for capturing video images including a start frame
WO2022061723A1 (zh) 一种图像处理方法、设备、终端及存储介质

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2012878638

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12878638

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015549913

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20157017260

Country of ref document: KR

Kind code of ref document: A