EP2741237B1 - Method, apparatus and system for implementing video mask - Google Patents

Method, apparatus and system for implementing video mask

Info

Publication number
EP2741237B1
Authority
EP
European Patent Office
Prior art keywords
video data
masked
masked area
video
area
Prior art date
Legal status
Active
Application number
EP12872312.9A
Other languages
German (de)
English (en)
Other versions
EP2741237A1 (fr)
EP2741237A4 (fr)
Inventor
Duanling SONG
Feng Wang
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of EP2741237A1
Publication of EP2741237A4
Application granted
Publication of EP2741237B1
Status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation using passive radiation detection systems
    • G08B13/194: Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678: User interface
    • G08B13/19686: Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates

Definitions

  • Embodiments of the present invention relate to the field of video surveillance, and in particular, to a method, an apparatus, and a system for implementing video mask.
  • In a known approach, encryption processing is performed for the image data of a masked part in a video, and the processed video is sent to a monitoring terminal.
  • A user with permission is capable of decrypting the image data of the masked part in the received video to see the complete video, while a user with no permission cannot see the image of the masked part.
  • However, a terminal of a user with no permission also acquires the image data of the masked part, and if abnormal means are used to decrypt that data, the image of the masked part can be seen. This causes a security risk.
  • The document WO 2006070249A1 relates to a video surveillance system which addresses the issue of privacy rights and scrambles regions of interest in a video scene to protect the privacy of human faces and objects captured by the system.
  • The video surveillance system is configured to identify persons and/or objects captured in a region of interest of a video scene by various techniques, such as detecting changes in a scene or by face detection.
  • The document US 20100149330A1 relates to a system and method for operator-side privacy zone masking of surveillance video.
  • The system includes a video surveillance camera equipped with a coordinate engine for determining coordinates of a current field of view of the surveillance camera, and a frame encoder for embedding the determined coordinates with video frames of the current field of view.
  • Embodiments of the present invention provide a method, an apparatus, and a system for implementing video mask, so as to solve a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • The peripheral unit 110 is configured to collect video data and send the collected video data to the monitoring platform through the transmission network.
  • The peripheral unit 110 may generate, according to set description information of a masked area, non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area, and separately transmit them to the monitoring platform.
  • The hardware of the peripheral unit 110 may take the form of any type of camera device, for example, webcams such as a dome camera, a box camera, and a semi-dome camera, or, as another example, an analog camera and an encoder.
  • The monitoring platform 120 is configured to receive the masked video data and the non-masked video data that are sent by the peripheral unit 110, or to obtain masked video data and non-masked video data by separating complete video data received from the peripheral unit 110, and to send corresponding video data to the monitoring terminal 130 according to the permission of the user of the monitoring terminal. For a user that has permission to acquire the masked video data, the monitoring platform 120 may send the masked video data and the non-masked video data to the monitoring terminal for merging and playing; alternatively, the monitoring platform 120 may merge the masked video data and the non-masked video data and send the merged data to the monitoring terminal for playing.
  • The monitoring terminal 130 is configured to receive the video data sent by the monitoring platform and, if the received video data includes the non-masked video data and the masked video data, is further configured to merge and play the masked video data and the non-masked video data.
  • FIG. 2 is a schematic flowchart of a method for implementing video mask according to a first embodiment of the present invention.
  • Step 210 Receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • The masked video data and the non-masked video data may specifically be encoded by using the H.264 format.
  • The device identifier is used to uniquely identify the peripheral unit; specifically, it may include an identifier of a camera of the peripheral unit, and may further include an identifier of a cloud mirror of the peripheral unit.
  • Step 220 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The masked area may specifically include one or more areas, where an area may be rectangular, circular, polygonal, and the like. If one area is included, the masked video data corresponding to the masked area may specifically include one channel of video data. If multiple areas are included, the masked video data corresponding to the masked area may specifically include one channel of video data, or may include multiple channels of video data, for example, with each area included in the masked area corresponding to one channel of video data.
  • Description information of the masked area may be used to describe the masked area.
  • The description information of the masked area specifically includes a coordinate of the masked area.
  • For a rectangular area, the description information of the masked area may include the coordinates of at least three vertices of the rectangle, or may include only the coordinate of one vertex of the rectangle together with the width and height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
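The (x, y, w, h) description of a rectangular masked area can be sketched as a small data structure. The class and method names below are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class MaskedAreaRect:
    """Description information of a rectangular masked area as (x, y, w, h):
    (x, y) is the upper-left vertex; w and h are width and height."""
    x: int
    y: int
    w: int
    h: int

    def vertices(self):
        """Derive the four vertex coordinates from (x, y, w, h)."""
        return [(self.x, self.y), (self.x + self.w, self.y),
                (self.x + self.w, self.y + self.h), (self.x, self.y + self.h)]

    def contains(self, px, py):
        """True if pixel (px, py) falls inside the masked area."""
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

area = MaskedAreaRect(x=100, y=50, w=200, h=120)
```

Either representation (vertex list or one vertex plus width/height) describes the same rectangle; the conversion above shows the equivalence.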
  • Overall permission control may be performed for the masked video data, that is, permission to access the masked video data is classified into two levels: having access permission and having no access permission. In this case, it can be directly determined whether a user has permission to access the masked video data. The first masked video data is then the masked video data, and the first masked area is the masked area (that is, the whole masked area is included).
  • Area-based permission control may also be performed for the masked video data. Respective permission is set for different areas, that is, video data that corresponds to different areas may correspond to different permission.
  • For example, the masked area includes three areas, where area 1 and area 2 correspond to permission A, and area 3 corresponds to permission B.
  • As another example, the masked area includes three areas, where area 1 corresponds to permission A, area 2 corresponds to permission B, and area 3 corresponds to permission C. In this case, it is necessary to determine whether the user has permission to access the masked video data that corresponds to a specific area.
  • The permission may be determined according to a password. For example, if a password that is received from the first monitoring terminal and used to acquire the first masked video data is determined to be correct (that is, the user inputs a correct password), it is determined that the user has the permission to acquire the first masked video data.
  • The permission may further be determined according to a user identifier of the user of the first monitoring terminal.
  • An authorized user identifier may be preconfigured, and if the user identifier matches the authorized user identifier, it is determined that the user has the permission to acquire the first masked video data; an authorized account type may also be preconfigured, and if the account type corresponding to the user identifier matches the authorized account type, it is determined that the user has the permission to acquire the first masked video data.
  • The user identifier may be acquired when the user logs in by using the monitoring terminal.
  • The video request received in step 210 may carry the user identifier, in which case the user identifier carried in the video request may be acquired.
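The three permission checks described above (correct password, authorized user identifier, authorized account type) can be sketched as follows. The function name and all sample data are assumptions for illustration only:

```python
# Preconfigured authorization data (illustrative values, not from the patent).
AUTHORIZED_USER_IDS = {"operator01"}
AUTHORIZED_ACCOUNT_TYPES = {"administrator"}
ACCOUNT_TYPE_OF = {"operator01": "operator", "admin7": "administrator"}
AREA_PASSWORD = "s3cret"

def has_mask_permission(user_id=None, password=None):
    """Determine whether a user may acquire the first masked video data."""
    # Password-based check: a correct password grants the permission.
    if password is not None and password == AREA_PASSWORD:
        return True
    # Identifier-based check: match a preconfigured authorized user identifier.
    if user_id in AUTHORIZED_USER_IDS:
        return True
    # Account-type-based check: match a preconfigured authorized account type.
    if ACCOUNT_TYPE_OF.get(user_id) in AUTHORIZED_ACCOUNT_TYPES:
        return True
    return False
```

A positive result leads to step 230A, a negative one to step 230B.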
  • If the determination result is yes, perform step 230A; if the determination result is no, perform step 230B.
  • Step 230A Acquire the first masked video data and the non-masked video data; and send the first masked video data and the non-masked video data to the first monitoring terminal, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data and send the merged video data to the first monitoring terminal.
  • A data type of the masked video data may also be sent to the first monitoring terminal, so that the first monitoring terminal identifies the masked video data from the received video data.
  • The data type may specifically be included in an acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the data type may be included in a message that is sent to the first monitoring terminal and carries the acquiring address; or the data type may be sent in the process of establishing a media channel between the first monitoring terminal and a monitoring platform, where the media channel is used to transmit the masked video data.
  • The method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • The description information may be included in the acquiring address (for example, a URL) that is sent to the first monitoring terminal and used to acquire the masked video data; or the description information may be included in the message that is sent to the first monitoring terminal and carries the acquiring address; or the description information may be sent in the process of establishing the media channel used to transmit the masked video data.
  • Step 230B Acquire the non-masked video data and send it to the first monitoring terminal.
  • Exemplary implementations of step 230A and step 230B are as follows:
  • The acquiring address (for example, a URL) sent to the first monitoring terminal carries a data type.
  • The data type is used to indicate whether the video data that can be acquired according to the acquiring address is the non-masked video data or the masked video data. Examples of a format of a URL (Universal Resource Locator) that carries the data type are as follows:
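The patent's own URL examples are not reproduced in this extract. A hypothetical format, in which a `datatype` query parameter distinguishes the two kinds of video data (the parameter name and values are assumptions), might look like this:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical acquiring addresses; the "datatype" parameter marks whether the
# URL yields non-masked or masked video data.
non_masked_url = "rtsp://mu.example.com/live/cam01?datatype=nonmasked"
masked_url = "rtsp://mu.example.com/live/cam01?datatype=masked"

def data_type_of(url):
    """Extract the data type carried in an acquiring address."""
    return parse_qs(urlparse(url).query)["datatype"][0]
```

The monitoring terminal can use this marker to tell the two streams apart before merging.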
  • Description information of the masked area (for example, a coordinate of the masked area) corresponding to the masked video data may be further carried in the acquiring address of the masked video data.
  • Examples of a format of a URL that carries the data type and the description information of the masked area are as follows:
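Again, the original URL examples are not reproduced here. A hypothetical sketch that carries both the data type and the (x, y, w, h) description of the masked area in query parameters (all parameter names are assumptions):

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical acquiring address for masked video data that also carries the
# masked-area description (x, y, w, h).
params = {"datatype": "masked", "x": 100, "y": 50, "w": 200, "h": 120}
masked_url = "rtsp://mu.example.com/live/cam01?" + urlencode(params)

def masked_area_of(url):
    """Recover the (x, y, w, h) description carried in the acquiring address."""
    q = parse_qs(urlparse(url).query)
    return tuple(int(q[k][0]) for k in ("x", "y", "w", "h"))
```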
  • The monitoring platform may further send the data type and/or the description information of the masked area to the first monitoring terminal by message exchange.
  • The data type and/or the description information of the masked area is included in a message body of an XML structure in a message that carries the URL, as shown in the following:
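The XML message body itself is not reproduced in this extract. A hypothetical sketch of such a body, carrying the URL, the data type, and the masked-area description (all element and attribute names are assumptions):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML message body accompanying the acquiring URL.
body = """\
<videoinfo>
  <url>rtsp://mu.example.com/live/cam01</url>
  <datatype>masked</datatype>
  <maskedarea x="100" y="50" w="200" h="120"/>
</videoinfo>"""

# The monitoring terminal parses the body to learn the data type and area.
root = ET.fromstring(body)
data_type = root.findtext("datatype")
area = {k: int(v) for k, v in root.find("maskedarea").attrib.items()}
```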
  • A user-defined structure body in an RTSP ANNOUNCE message may also be used to carry the data type and/or the description information of the masked area in the process of establishing the media channel between the first monitoring terminal and the monitoring platform.
  • An example is shown as follows:
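The RTSP example is not reproduced in this extract. A hypothetical sketch of an ANNOUNCE message whose user-defined body carries the data type and masked-area description follows; the `CSeq`, `Content-Type`, and `Content-Length` headers are standard RTSP, while the body fields are assumptions:

```python
def build_announce(uri, cseq, data_type, area):
    """Build an RTSP ANNOUNCE message with a user-defined parameter body."""
    body = ("datatype: {}\r\n"
            "maskedarea: {},{},{},{}\r\n").format(data_type, *area)
    return ("ANNOUNCE {} RTSP/1.0\r\n"
            "CSeq: {}\r\n"
            "Content-Type: text/parameters\r\n"
            "Content-Length: {}\r\n"
            "\r\n"
            "{}").format(uri, cseq, len(body), body)

msg = build_announce("rtsp://mu.example.com/live/cam01", 2,
                     "masked", (100, 50, 200, 120))
```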
  • In step 230A, the acquiring of the first masked video data and the non-masked video data, merging them, and sending the merged video data to the first monitoring terminal specifically includes: generating an acquiring address (for example, a URL) used to acquire the merged video data and sending it to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address, establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the merged video data, acquiring and merging the first masked video data and the non-masked video data, and sending the merged video data to the first monitoring terminal through the media channel.
  • Step 230B may include: generating an acquiring address of the non-masked video data and sending it to the first monitoring terminal, receiving a request that is sent by the first monitoring terminal and includes the acquiring address, establishing, with the first monitoring terminal according to the acquiring address, a media channel used to send the non-masked video data, acquiring the non-masked video data according to the acquiring address of the non-masked video data, and sending the non-masked video data through the media channel.
  • A CU (Client Unit) in this implementation manner is client software installed on a monitoring terminal and provides monitoring personnel with functions such as real-time video surveillance, video query and playback, and a cloud mirror operation.
  • A monitoring platform includes an SCU (Service Control Unit) and an MU (Media Unit).
  • The SCU and the MU may be implemented in a same universal server or dedicated server, or may be separately implemented in different universal servers or dedicated servers.
  • Step 301 A CU sends a video request to an SCU of a monitoring platform, where the video request includes a device identifier and is used to request video data of a peripheral unit identified by the device identifier, and the video data includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • Step 302 The SCU determines whether a user of the CU has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • The implementation of step 302 is the same as that of step 220, and therefore no further details are provided herein.
  • If the determination result is yes, steps 303A-312A are performed. In this implementation manner, it is assumed that the first masked video data includes one channel of video data.
  • If the determination result is no, steps 303B-308B are performed.
  • Steps 303A-306A The SCU requests a URL of the first masked video data and a URL of the non-masked video data from an MU, and the MU generates the URL of the first masked video data and the URL of the non-masked video data and returns them to the SCU.
  • Step 307A The SCU returns the URL of the first masked video data and the URL of the non-masked video data to the CU.
  • Steps 308A-309A The CU requests the first masked video data from the MU according to the URL of the first masked video data, establishes, with the MU, a media channel used to transmit the first masked video data, and receives, through the media channel, the first masked video data sent by the MU.
  • Steps 310A-311A The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 312A The CU merges and plays the first masked video data and the non-masked video data.
  • Steps 303B-304B The SCU requests a URL of the non-masked video data from the MU, and the MU generates the URL of the non-masked video data and returns it to the SCU.
  • Step 305B The SCU returns the URL of the non-masked video data to the CU.
  • Steps 306B-307B The CU requests the non-masked video data from the MU according to the URL of the non-masked video data, establishes, with the MU, a media channel used to transmit the non-masked video data, and receives, through the media channel, the non-masked video data sent by the MU.
  • Step 308B The CU plays the non-masked video data.
  • In the embodiments, a monitoring platform determines the permission of a user of a monitoring terminal and, according to the determined result, sends only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data. To a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, the platform sends the masked video data and the non-masked video data, so that the monitoring terminal merges and plays them, or sends video data merged from the masked video data and the non-masked video data. This solves the security risk problem in the prior art that results from sending image data of a masked part to terminals of users with different permission.
  • Area-based permission control may also be implemented: if the masked area includes multiple areas, permission may be set for each area, and the masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • The first embodiment of the present invention not only can be used in a real-time video surveillance scenario, but can also be used in a video view scenario (for example, video playback and video downloading). If the first embodiment is used in the video view scenario, the acquiring of non-masked video data in steps 230A and 230B specifically means reading the non-masked video data from a non-masked video file, and the acquiring of masked video data in step 230A specifically means reading the masked video data from a masked video file.
  • Before step 210, the following operations are performed:
  • The establishing of an association between the masked video file and the non-masked video file specifically includes: recording a non-masked video index and a masked video index, and establishing an association between the non-masked video index and the masked video index, where the non-masked video index includes a device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file (for example, a storage address of the non-masked video file, which may specifically be an absolute path of the non-masked video file), and the indication information of the non-masked video data is used to indicate that the non-masked video index is an index of the non-masked video file; and the masked video index includes indication information of the masked video data and an identifier of the masked video file (for example, a storage address of the masked video file, which may specifically be an absolute path of the masked video file), and the indication information of the masked video data is used to indicate that the masked video index is an index of the masked video file.
  • Both the non-masked video index and the masked video index may include indication information of a non-independent index, where the indication information of the non-independent index is used to indicate an index associated with the index.
  • For example, the indication information of the non-independent index in the non-masked video index is used to indicate a masked video index associated with the non-masked video index.
  • The non-masked video index and/or the masked video index may further include description information of a masked area, or information (for example, a storage address of the description information of the masked area) used to acquire the description information of the masked area.
  • The establishing of an association between the non-masked video index and the masked video index may specifically include recording an identifier (for example, an index number) of the masked video index into the non-masked video index, or may further include recording an identifier (for example, an index number) of the non-masked video index into the masked video index, or may further include recording an association between the identifier of the masked video index and the identifier of the non-masked video index. It should be noted that if the masked video data includes multiple channels of video data, a masked video index may be established for each channel of video data, and an association is established between the non-masked video index and each masked video index.
  • Description information of the masked area corresponding to the video file, or information used to acquire that description information, is recorded in each masked video index.
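The two index records and their association can be sketched as follows, following the fields described above; all names and types are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MaskedVideoIndex:
    index_no: int
    is_masked: bool                  # indication information of the masked data
    file_id: str                     # e.g. absolute path of the masked video file
    non_masked_index_no: Optional[int] = None   # optional back-reference

@dataclass
class NonMaskedVideoIndex:
    index_no: int
    device_id: str                   # device identifier of the peripheral unit
    start_time: float                # video start time
    end_time: float                  # video end time
    is_masked: bool                  # indication information of the non-masked data
    file_id: str                     # e.g. absolute path of the non-masked video file
    masked_index_nos: List[int] = field(default_factory=list)  # the association

def associate(non_masked: NonMaskedVideoIndex, masked: MaskedVideoIndex):
    """Record the association in both directions; with multiple channels,
    one masked index is associated per channel."""
    non_masked.masked_index_nos.append(masked.index_no)
    masked.non_masked_index_no = non_masked.index_no

nm_index = NonMaskedVideoIndex(1, "cam01", 0.0, 3600.0, False, "/video/cam01_nm.mp4")
m_index = MaskedVideoIndex(2, True, "/video/cam01_m1.mp4")
associate(nm_index, m_index)
```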
  • The video request sent in step 210 may further include a view time.
  • The acquiring of the non-masked video data specifically means acquiring the video data corresponding to the view time from the non-masked video file, and may specifically include: acquiring the non-masked video index according to the identifier of the peripheral unit, the view time, and the indication information of the non-masked video data; acquiring the non-masked video file according to the identifier of the non-masked video file in the non-masked video index; and acquiring the non-masked video data corresponding to the view time from the non-masked video file.
  • The acquiring of the masked video data specifically means acquiring, according to the association between the masked video file and the non-masked video file, one or more video files that are associated with the non-masked video file and correspond to the first masked area, and acquiring the video data corresponding to the view time from those files. It specifically includes: acquiring, according to the association between the non-masked video index and the masked video index (for example, according to the identifier of the masked video index in the non-masked video index), the masked video index associated with the non-masked video index; acquiring, according to the identifier of the masked video file included in the masked video index, the one or more video files corresponding to the first masked area; and acquiring the video data corresponding to the view time from those files.
  • The masked video index associated with the non-masked video index may further be determined according to the indication information of the non-independent index in the non-masked video index, so as to improve the efficiency of the monitoring platform in retrieving the masked video index.
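The retrieval path described above (non-masked index by device and view time, then the associated masked indexes) can be sketched as follows. Index records are plain dicts here, and all field names and file paths are assumptions:

```python
# Illustrative index table: one non-masked index associated with one masked index.
INDEXES = [
    {"no": 1, "device": "cam01", "start": 0, "end": 3600,
     "masked": False, "file": "/video/cam01_nm.mp4", "assoc": [2]},
    {"no": 2, "device": "cam01", "start": 0, "end": 3600,
     "masked": True, "file": "/video/cam01_m1.mp4", "assoc": []},
]

def find_files(device_id, view_time):
    """Return (non_masked_file, [masked_files]) for the requested view time."""
    # Locate the non-masked index covering the view time for this device.
    nm = next(i for i in INDEXES
              if not i["masked"] and i["device"] == device_id
              and i["start"] <= view_time < i["end"])
    # Follow the recorded association to the masked video files.
    masked = [j["file"] for n in nm["assoc"]
              for j in INDEXES if j["no"] == n]
    return nm["file"], masked
```

The actual video data for the view time would then be read from the returned files.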
  • An acquiring address used to acquire the non-masked video data may be generated according to the non-masked video index and sent to the first monitoring terminal; a request that is sent by the first monitoring terminal and includes the acquiring address of the non-masked video data is received; a media channel used to send the non-masked video data is established with the first monitoring terminal according to the acquiring address; the non-masked video data is acquired according to the acquiring address; and the non-masked video data is sent through the media channel. For example, as shown in the figure, the SCU of the monitoring platform acquires the non-masked video index after receiving the video request, requests, from the MU according to the non-masked video index, a URL used to acquire the non-masked video data corresponding to the non-masked video index, and sends the URL to the CU.
  • The MU receives the request that is sent by the CU and includes the URL, establishes, with the CU according to the URL, a media channel used to send the non-masked video data, reads the non-masked video data in the video file according to the URL, and sends the non-masked video data to the CU through the media channel.
  • The process of sending the masked video data after the masked video index is acquired is similar to the process of sending the non-masked video data after the non-masked video index is acquired, and therefore no further details are provided herein.
  • The method may further include: sending description information of the first masked area to the first monitoring terminal, so that the first monitoring terminal merges and plays, according to the description information of the first masked area, the first masked video data and the non-masked video data that are received in step 230A.
  • This may specifically include: acquiring the description information of the masked area that is included in the non-masked video index or in the masked video index corresponding to the first masked video data, or acquiring the description information of the first masked area according to information that is included in the non-masked video index or the masked video index and used to acquire the description information of the first masked area, and sending the acquired description information of the first masked area to the first monitoring terminal.
  • The description information of the first masked area may be carried in a message that is sent to the first monitoring terminal and carries an acquiring address of the first masked video data.
  • The method further includes receiving a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes a device identifier of the peripheral unit and the description information of the masked area.
  • The description information of the masked area may be sent to the peripheral unit, and the non-masked video data and the masked video data that are generated by the peripheral unit according to the description information of the masked area are received; alternatively, the masked video data and the non-masked video data may be obtained by separating, according to the description information of the masked area, complete video data received from the peripheral unit.
  • The masked video data and the non-masked video data may be sent to the first monitoring terminal and be merged and played by the first monitoring terminal, or the masked video data and the non-masked video data may be merged and then sent to the first monitoring terminal.
  • The first monitoring terminal and the second monitoring terminal may be the same monitoring terminal.
  • The entity generating the non-masked video data and the masked video data may be a peripheral unit or a monitoring platform.
  • The entity merging the non-masked video data and the masked video data may be a monitoring platform or a monitoring terminal (that is, the first monitoring terminal in the first embodiment of the present invention).
  • A first exemplary implementation manner is as follows: as shown in FIG. 4, the peripheral unit generates the non-masked video data and the masked video data, the monitoring platform separately sends the monitoring terminal (for example, the first monitoring terminal in this embodiment) the non-masked video data and the masked video data (for example, the first masked video data in this embodiment) that a user has permission to acquire, and the monitoring terminal merges and plays the received video data.
  • Step 401 A second monitoring terminal sends a masked area setting request to a monitoring platform, where the masked area setting request includes a device identifier and description information of a masked area.
  • The masked area may specifically include one or more areas, where an area may be rectangular, circular, polygonal, and the like.
  • The description information of the masked area specifically includes a coordinate of the masked area.
  • For a rectangular area, the description information of the masked area may include the coordinates of at least three vertices of the rectangle, or may include only the coordinate of one vertex of the rectangle together with the width and height of the rectangle, for example (x, y, w, h), where x is the horizontal coordinate of the upper-left vertex, y is the vertical coordinate of the upper-left vertex, w is the width, and h is the height.
  • Step 402 The monitoring platform sends the masked area setting request to a peripheral unit identified by the device identifier, where the masked area setting request includes the description information of the masked area.
  • Step 403 The peripheral unit encodes a captured video picture to generate masked video data and non-masked video data.
  • the peripheral unit encodes the captured video picture into the non-masked video data corresponding to a non-masked area and the masked video data corresponding to the masked area. If the masked area includes one area, a video picture corresponding to the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data.
  • video pictures corresponding to the multiple areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or video pictures corresponding to the multiple areas included in the masked area may be encoded into one channel of video data each, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, the areas corresponding to the same permission correspond to a same channel of video data, for example, if the masked area includes three areas, area 1 and area 2 correspond to same permission, and area 3 corresponds to another permission, video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and a video picture corresponding to area 3 is encoded into another channel of video data.
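The third option above, grouping areas that share the same permission into one channel of video data, can be sketched as follows; the area names and permission labels are hypothetical sample data chosen to mirror the example of area 1, area 2, and area 3.

```python
from collections import defaultdict

def group_by_permission(areas):
    """Map each permission level to the list of masked areas that will be
    encoded into the same channel of video data."""
    channels = defaultdict(list)
    for name, permission in areas:
        channels[permission].append(name)
    return dict(channels)

# area 1 and area 2 share one permission, area 3 has another,
# so two channels of masked video data result
areas = [("area1", "perm_A"), ("area2", "perm_A"), ("area3", "perm_B")]
channels = group_by_permission(areas)
```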
  • the video picture corresponding to the masked area may be directly encoded into the masked video data, that is, a video data frame of the masked video data includes only pixel data of the video picture corresponding to the masked area; or the whole captured video picture may be encoded after the video picture corresponding to the non-masked area is filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both pixel data of the video picture corresponding to the masked area and filled pixel data.
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
  • the video picture corresponding to the non-masked area may be directly encoded into the non-masked video data, or the video picture in the whole captured video picture may be encoded after filling the video picture by using a set pixel value so as to generate the non-masked video data, where the video picture corresponds to the masked area, and the set pixel value is preferably RGB (0, 0, 0).
  • timestamps of video data frames corresponding to a same complete video picture are kept completely consistent in the masked video data and the non-masked video data.
  • the description information of the masked area is sent by the monitoring platform to the peripheral unit.
  • the description information of the masked area may be preset on the peripheral unit.
  • Step 404 Send the generated masked video data and non-masked video data to the monitoring platform.
  • the peripheral unit may further send a data type of the masked video data to the monitoring platform, so that the monitoring platform identifies the masked video data from received video data.
  • the data type may be specifically included in an acquiring address (for example, a URL) that is sent to the monitoring platform and used to acquire the masked video data (where the monitoring platform may acquire the masked video data from the peripheral unit by using the acquiring address), or the data type may be included in a message that is sent to the monitoring platform and used to carry the acquiring address, or the data type may be sent in a process of establishing a media channel between the monitoring platform and the peripheral unit and used to transmit the masked video data.
  • Step 405 A first monitoring terminal sends a video request to the monitoring platform, where the video request includes the device identifier of the peripheral unit.
  • Step 406 Determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • steps 407A-409A are performed.
  • steps 407B-408B are performed.
  • Step 407A The monitoring platform sends description information of the first masked area to the first monitoring terminal.
  • Step 408A The monitoring platform sends the first masked video data and the non-masked video data to the first monitoring terminal.
  • Step 409A The first monitoring terminal merges and plays the received first masked video data and non-masked video data.
  • the received first masked video data and non-masked video data are merged and played according to the description information of the first masked area.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added, according to the description information of the first masked area, to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture is encoded after the video picture corresponding to the non-masked area is filled by using a set pixel value so as to generate the masked video data, that is, a video data frame of the masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, then pixel data of the pixel area corresponding to the first masked area is extracted from the masked video data frame according to the description information of the first masked area.
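The per-frame merge described above can be sketched with decoded frames modeled as 2-D lists of pixel values; the masked frame's pixels are copied into the non-masked frame at the rectangle (x, y, w, h) taken from the masked-area description. This is a simplified single-frame illustration with invented names, omitting decoding and timestamp matching.

```python
def merge_frames(non_masked, masked, x, y, w, h):
    """Overlay the masked frame's pixels onto the non-masked frame at the
    pixel area (x, y, w, h), producing a complete video data frame."""
    merged = [row[:] for row in non_masked]   # copy the non-masked frame
    for row in range(h):
        for col in range(w):
            merged[y + row][x + col] = masked[row][col]
    return merged

base = [[0] * 4 for _ in range(4)]            # 4x4 non-masked frame (filled)
mask = [[9, 9], [9, 9]]                       # 2x2 masked-area frame
full = merge_frames(base, mask, x=1, y=1, w=2, h=2)
```

In the actual procedure this merge is applied to each pair of frames that share a timestamp, and the result is played (at the terminal) or re-encoded (at the platform).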
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and the complete video data frame is played.
  • both the non-masked video data and the first masked video data are transmitted to the first monitoring terminal through the RTP protocol.
  • the first monitoring terminal receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol, parses the non-masked video data code stream and the first masked video data code stream to obtain the non-masked video data and the first masked video data respectively, and separately caches the non-masked video data and the first masked video data in a decoder buffer area.
  • Frame data is synchronized according to a synchronization timestamp, that is, frame data that has a same timestamp is separately extracted from the non-masked video data and the first masked video data.
  • the extracted frame data of the non-masked video data and the extracted frame data of the first masked video data that have the same timestamp are separately decoded to generate corresponding YUV data.
  • YUV data of the first masked video data and YUV data of the non-masked video data are merged according to the description information of the first masked area, and the merged YUV data is rendered and played.
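The synchronization step above (pairing frames from the two decoder buffers by identical timestamp before merging) can be sketched as follows; the buffer contents are made-up sample data and the function name is an assumption for illustration.

```python
def synchronize(non_masked_buf, masked_buf):
    """Pair frames from the two buffers that carry the same timestamp.
    Each buffer holds (timestamp, frame) tuples."""
    masked_by_ts = {ts: frame for ts, frame in masked_buf}
    pairs = []
    for ts, frame in non_masked_buf:
        if ts in masked_by_ts:
            pairs.append((ts, frame, masked_by_ts[ts]))
    return pairs

# sample decoder buffer contents keyed by synchronization timestamp
non_masked_buf = [(100, "nmA"), (140, "nmB"), (180, "nmC")]
masked_buf = [(100, "mA"), (180, "mC")]
pairs = synchronize(non_masked_buf, masked_buf)
```

This works because, as stated earlier, timestamps of video data frames corresponding to the same complete video picture are kept completely consistent in the masked and non-masked video data.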
  • a request for acquiring video data is sent to the peripheral unit after step 404.
  • video data that a user of the first monitoring terminal has permission to acquire may be requested from the peripheral unit according to the determined result in step 406. For example, if the user only has permission to acquire the non-masked video data, only the non-masked video data is requested; and if the user has permission to acquire the non-masked video data and the first masked video data, both the non-masked video data and the first masked video data are requested.
  • after receiving the request, the peripheral unit generates the requested video data and returns it to the monitoring platform.
  • a method used by the peripheral unit to generate the non-masked video data and the first masked video data is the same as that in step 403, and therefore no further details are provided.
  • Step 407B The monitoring platform forwards the non-masked video data to the first monitoring terminal.
  • Step 408B The first monitoring terminal plays the received non-masked video data.
  • a second exemplary implementation manner is as follows: As shown in FIG. 7 , the peripheral unit generates the non-masked video data and the masked video data, and the monitoring platform merges the non-masked video data and the masked video data (that is, the first masked video data) that a user has permission to acquire and then sends them to the monitoring terminal.
  • Steps 501-506 are the same as steps 401-406, and therefore no further details are provided.
  • steps 507A-510A are performed.
  • steps 507B-508B are performed.
  • Step 507A is the same as step 407A.
  • Step 508A The monitoring platform merges the non-masked video data and the first masked video data.
  • the first masked video data and the non-masked video data are merged according to the description information of the masked area received in step 501.
  • the first masked video data includes one channel of video data
  • the first masked video data is decoded to obtain a masked video data frame
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in the masked video data frame is extracted
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • the extracting the pixel data in the masked video data frame is specifically extracting all pixel data in the masked video data frame.
  • if, during the encoding, the whole captured video picture is encoded after the video picture corresponding to the non-masked area is filled by using a set pixel value so as to generate the first masked video data, that is, a video data frame of the first masked video data includes both the pixel data of the video picture corresponding to the masked area and the filled pixel data, then pixel data of the pixel area corresponding to the first masked area is extracted from the masked video data frame.
  • the first masked video data includes multiple channels of video data
  • each channel of video data in the first masked video data is decoded to obtain a masked video data frame of the channel of video data
  • the non-masked video data is decoded to obtain a non-masked video data frame
  • pixel data in masked video data frames of all channels of video data is extracted, where the masked video data frames have a same timestamp
  • the extracted pixel data is added to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the masked area
  • the complete video data frame is encoded to obtain the merged video data.
  • both the non-masked video data and the first masked video data are transmitted to the monitoring platform through the RTP protocol.
  • Processing after the monitoring platform receives a non-masked video data code stream and a first masked video data code stream that are encapsulated through the RTP protocol is similar to the processing after the first monitoring terminal receives a code stream in step 409A.
  • a difference lies only in that the first monitoring terminal renders and plays YUV data after merging the YUV data, while the monitoring platform encodes merged YUV data after merging the YUV data, so as to generate the merged video data.
  • Step 509A Send the merged video data to the first monitoring terminal.
  • Step 510A The first monitoring terminal directly decodes and plays the merged video data.
  • Steps 507B-508B are the same as steps 407B-408B.
  • a third exemplary implementation manner is as follows: As shown in FIG. 10 , the peripheral unit generates complete video data, the monitoring platform obtains the masked video data and the non-masked video data by separating the complete video data received from the peripheral unit, and separately sends the monitoring terminal the non-masked video data and the masked video data that a user has permission to acquire, and the monitoring terminal merges and plays the received masked video data and non-masked video data.
  • Step 601 is the same as step 401, and therefore no further details are provided.
  • Step 602 The peripheral unit encodes a captured video picture into complete video data and sends the complete video data to the monitoring platform.
  • Step 603 The monitoring platform obtains the masked video data corresponding to the masked area and the non-masked video data corresponding to the non-masked area by separating the complete video data according to the description information of the masked area received in step 601.
  • a video picture in the complete video data may be encoded into one channel of video data, that is, the masked video data includes one channel of video data, where the video picture corresponds to the masked area.
  • video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data, that is, the masked video data includes one channel of video data; or video pictures in the complete video data that correspond to the multiple areas included in the masked area may be encoded into one channel of video data each, that is, the masked video data includes multiple channels of video data and each area corresponds to one channel of video data; or video pictures corresponding to areas with same permission among the multiple areas included in the masked area may be encoded into one channel of video data, that is, the areas corresponding to the same permission correspond to a same channel of video data, for example, if the masked area includes three areas, area 1 and area 2 correspond to same permission, and area 3 corresponds to another permission, video pictures corresponding to area 1 and area 2 are encoded into a same channel of video data, and a video picture corresponding to area 3 is encoded into another channel of video data.
  • the video picture corresponding to the masked area may be directly encoded into the masked video data. This includes: decoding the complete video data to obtain a complete video data frame and extracting pixel data of the video picture in the complete video data frame to generate a video data frame of the masked video data, where the video picture corresponds to the masked area.
  • a video picture in the whole captured video picture may also be encoded after filling the video picture by using a set pixel value so as to generate the masked video data, where the video picture corresponds to the non-masked area.
  • the obtaining the non-masked video data corresponding to a non-masked area may specifically be directly encoding the video picture corresponding to the non-masked area into the non-masked video data, which includes decoding the complete video data to obtain a complete video data frame and extracting, from the complete video data frame, pixel data of the video picture corresponding to the non-masked area to generate the video data frame of the non-masked video data; or may specifically be encoding the whole video picture after filling the video picture corresponding to the masked area by using a set pixel value so as to generate the non-masked video data, which includes decoding the complete video data to obtain a complete video data frame and setting a pixel value of each pixel of the pixel area corresponding to the masked area in the complete video data frame to the set pixel value, where the set pixel value is preferably RGB (0, 0, 0).
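The separation performed by the monitoring platform in step 603 can be sketched for a single decoded frame: the masked rectangle is extracted into a masked frame, and the same region in the non-masked frame is filled with the set pixel value (0 here, standing in for the preferred RGB (0, 0, 0)). Frames are simplified to 2-D lists and the function name is illustrative.

```python
FILL = 0  # set pixel value used for the filled pixel area

def separate(frame, x, y, w, h):
    """Split a complete frame into a masked-area frame and a non-masked
    frame whose masked region is filled with FILL."""
    masked = [frame[y + r][x:x + w] for r in range(h)]
    non_masked = [row[:] for row in frame]
    for r in range(h):
        for c in range(w):
            non_masked[y + r][x + c] = FILL
    return masked, non_masked

complete = [[v] * 4 for v in (1, 2, 3, 4)]    # 4x4 complete decoded frame
masked, non_masked = separate(complete, x=1, y=1, w=2, h=2)
```

Because the fill value replaces the sensitive pixels before re-encoding, the non-masked video data that reaches an unprivileged terminal carries no image data from the masked part.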
  • Encoding formats include but are not limited to H.264, MPEG4, and MJPEG.
  • Steps 604-605 are the same as steps 405-406.
  • steps 606A-608A are performed.
  • steps 606B-607B are performed.
  • Steps 606A-608A are the same as steps 407A-409A.
  • Steps 606B-607B are the same as steps 407B-408B.
  • a second embodiment of the present invention provides a monitoring platform 500.
  • the monitoring platform includes a video request receiving unit 501, a determining unit 502, an acquiring unit 503, and a video data sending unit 504.
  • the video request receiving unit 501 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 502 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 503 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 502 is yes.
  • the video data sending unit 504 is configured to: when the determined result of the determining unit 502 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 503, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 503 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 502 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 503.
  • the monitoring platform further includes a setting request receiving unit 505.
  • the setting request receiving unit 505 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 506 and a first video data receiving unit 507.
  • the description information sending unit 506 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 507 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 508 and a video data separating unit 509.
  • the second video data receiving unit 508 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 509 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the monitoring platform further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 501 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 503 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 502 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 503 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 502 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
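The index matching just described (device identifier plus view time against the recorded video start and end time) can be sketched as a simple lookup; the record fields and sample entries are invented for illustration.

```python
def match_index(indexes, device_id, view_time):
    """Return the non-masked video index whose device identifier matches
    and whose start/end time range covers the requested view time."""
    for idx in indexes:
        if (idx["device_id"] == device_id
                and idx["start"] <= view_time <= idx["end"]):
            return idx
    return None

# sample non-masked video indexes: device id, time range, video file id
indexes = [
    {"device_id": "cam-01", "start": 0, "end": 3600, "file": "nm-0001"},
    {"device_id": "cam-01", "start": 3600, "end": 7200, "file": "nm-0002"},
]
hit = match_index(indexes, "cam-01", view_time=4000)
```

Once the non-masked index is found, the recorded association leads to the masked video index and, through the file identifiers it holds, to the one or more video files corresponding to the first masked area.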
  • a functional unit described in the second embodiment of the present invention can be used to implement the method described in the first embodiment.
  • the video request receiving unit 501, the determining unit 502, the setting request receiving unit 505, and the description information sending unit 506 are located on an SCU of the monitoring platform, and the acquiring unit 503, the video data sending unit 504, the first video data receiving unit 507, the second video data receiving unit 508, and the video data separating unit 509 are located on an MU of the monitoring platform.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a third embodiment of the present invention provides a monitoring terminal 600.
  • the monitoring terminal includes a video request sending unit 601, a video data receiving unit 602, and a playing unit 603.
  • the video request sending unit 601 is configured to send a video request to a monitoring platform, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 602 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 602, or configured to play the non-masked video data received by the video data receiving unit 602.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • a functional unit described in the third embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fourth embodiment of the present invention provides a peripheral unit 700.
  • the peripheral unit includes a description information receiving unit 701, a video data encoding unit 702, and a video data sending unit 703.
  • the description information receiving unit 701 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform.
  • the video data encoding unit 702 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 703 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 702 is specifically configured to: when the masked area includes one area, encode a video picture in the captured video picture into one channel of video data according to the description information of the masked area, where the video picture corresponds to the masked area; or when the masked area includes multiple areas, encode video pictures in the captured video picture into one channel of video data according to the description information of the masked area, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data each, where the video pictures correspond to the multiple areas included in the masked area, or encode video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with same permission among the multiple areas included in the masked area; and further specifically configured to encode a video picture in the captured video picture into the non-masked video data according to the description information of the masked area, where the video picture corresponds to the non-masked area.
  • a functional unit described in the fourth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a fifth embodiment of the present invention provides a monitoring platform 1000, including:
  • the processor 1010, the communications interface 1020, and the memory 1030 communicate with one another through the bus 1040.
  • the communications interface 1020 is configured to communicate with a network element, for example, communicate with a monitoring terminal or a peripheral unit.
  • the processor 1010 is configured to execute a program 1032.
  • the program 1032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 1010 is configured to execute a computer program stored in the memory and may specifically be a central processing unit (CPU), which is the core unit of a computer.
  • the memory 1030 is configured to store the program 1032.
  • the memory 1030 may include a high-speed RAM memory, or may further include a non-volatile memory (non-volatile memory), for example, at least one disk memory.
  • the program 1032 may specifically include a video request receiving unit 1032-1, a determining unit 1032-2, an acquiring unit 1032-3, and a video data sending unit 1032-4.
  • the video request receiving unit 1032-1 is configured to receive a video request sent by a first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the determining unit 1032-2 is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the acquiring unit 1032-3 is configured to acquire the non-masked video data and configured to acquire the first masked video data when a determined result of the determining unit 1032-2 is yes.
  • the video data sending unit 1032-4 is configured to: when the determined result of the determining unit 1032-2 is yes, send the first monitoring terminal the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3, so that the first monitoring terminal merges and plays the first masked video data and the non-masked video data, or merge the first masked video data and the non-masked video data that are acquired by the acquiring unit 1032-3 to obtain merged video data, and send the merged video data to the first monitoring terminal; and further configured to: when the determined result of the determining unit 1032-2 is no, send the first monitoring terminal the non-masked video data acquired by the acquiring unit 1032-3.
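The permission-based dispatch performed by the determining, acquiring, and video data sending units above can be sketched as follows. This is a minimal illustration only: the names `VideoRequest`, `has_permission`, and `dispatch`, the byte-string stand-ins for video streams, and the trivial concatenation used in place of real merging are all assumptions, not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class VideoRequest:
    device_id: str   # device identifier of the peripheral unit
    user: str        # user of the first monitoring terminal

def has_permission(user: str, masked_area_id: str) -> bool:
    # Placeholder permission store; a real platform would consult the
    # per-area permission configuration set for each user.
    return (user, masked_area_id) in {("admin", "area-1")}

def dispatch(request: VideoRequest, non_masked: bytes,
             masked: dict, merge_on_platform: bool = False) -> dict:
    """Return the payload(s) the platform would send to the terminal."""
    # First masked video data: the part of the masked video data the
    # user is permitted to acquire.
    granted = {area: data for area, data in masked.items()
               if has_permission(request.user, area)}
    if not granted:
        # Determined result is "no": only non-masked video data is sent.
        return {"non_masked": non_masked}
    if merge_on_platform:
        # The platform itself merges masked and non-masked video data.
        merged = non_masked + b"".join(granted.values())  # stand-in merge
        return {"merged": merged}
    # Otherwise both streams are sent so the terminal merges and plays them.
    return {"non_masked": non_masked, "masked": granted}
```

A user without permission for any masked area thus never receives masked image data, which is the security property the embodiment targets.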
  • the program further includes a setting request receiving unit 1032-5.
  • the setting request receiving unit 1032-5 is configured to receive a masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area.
  • the monitoring platform further includes a description information sending unit 1032-6 and a first video data receiving unit 1032-7.
  • the description information sending unit 1032-6 is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit 1032-7 is configured to receive the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area.
  • the monitoring platform further includes a second video data receiving unit 1032-8 and a video data separating unit 1032-9.
  • the second video data receiving unit 1032-8 is configured to receive complete video data sent by the peripheral unit; and the video data separating unit 1032-9 is configured to obtain the masked video data and the non-masked video data by separating the complete video data received by the second video data receiving unit.
  • the program further includes a storing unit and an association establishing unit.
  • the storing unit is configured to store the masked video data into a masked video file and store the non-masked video data into a non-masked video file, and the masked video file includes one or more video files.
  • the association establishing unit is configured to establish an association between the masked video file and the non-masked video file.
  • the video request receiving unit 1032-1 is specifically configured to receive a video request that includes view time and is sent by the first monitoring terminal.
  • the acquiring unit 1032-3 is specifically configured to acquire video data corresponding to the view time from the non-masked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and are associated with the non-masked video file and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area when the determined result of the determining unit 1032-2 is yes.
  • the association establishing unit is specifically configured to record a non-masked video index and a masked video index and establish an association between the non-masked video index and the masked video index, where the non-masked video index includes the device identifier of the peripheral unit, video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file.
  • the acquiring unit 1032-3 is specifically configured to obtain, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index, acquire the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index, and acquire the video data corresponding to the view time from the non-masked video file; and further specifically configured to acquire, when the determined result of the determining unit 1032-2 is yes, the masked video index associated with the non-masked video index according to the association, acquire, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area, and acquire video data corresponding to the view time from the one or more video files corresponding to the first masked area.
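The index-based lookup described above can be sketched as follows: a non-masked video index is matched by device identifier and view time, and the association is then followed to the masked video index to collect the masked video file(s). The dataclass shapes and field names are illustrative assumptions, not the embodiment's actual data layout.

```python
from dataclasses import dataclass, field

@dataclass
class NonMaskedIndex:
    device_id: str             # device identifier of the peripheral unit
    start: int                 # video start time, e.g. epoch seconds
    end: int                   # video end time
    file_id: str               # identifier of the non-masked video file
    masked_index_id: str = ""  # association to a masked video index, if any

@dataclass
class MaskedIndex:
    index_id: str
    file_ids: list = field(default_factory=list)  # one or more masked files

def find_files(non_masked_indexes, masked_indexes,
               device_id, view_time, has_permission):
    # Match the non-masked index by device identifier and view time.
    for idx in non_masked_indexes:
        if idx.device_id == device_id and idx.start <= view_time <= idx.end:
            files = [idx.file_id]
            if has_permission and idx.masked_index_id:
                # Follow the association to the masked video index and
                # collect the file(s) corresponding to the first masked area.
                masked = next(m for m in masked_indexes
                              if m.index_id == idx.masked_index_id)
                files += masked.file_ids
            return files
    return []  # no index matched the request
```

Keeping masked and non-masked data in separate files linked only by this association is what lets the platform serve a playback request without ever opening the masked files for an unauthorized user.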
  • each unit in the program 1032 refers to a corresponding unit in the second embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the fifth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • a sixth embodiment of the present invention provides a monitoring terminal 2000, including:
  • the processor 2010, the communications interface 2020, and the memory 2030 communicate with each other through the bus 2040.
  • the communications interface 2020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 2010 is configured to execute a program 2032.
  • the program 2032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 2010 is configured to execute the computer program stored in the memory and may specifically be a central processing unit (CPU), the core unit of a computer.
  • the memory 2030 is configured to store the program 2032.
  • the memory 2030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • the program 2032 may specifically include a video request sending unit 2032-1, a video data receiving unit 2032-2, and a playing unit 2032-3.
  • the video request sending unit is configured to send a video request to a monitoring platform, the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area.
  • the video data receiving unit 2032-2 is configured to receive first masked video data and the non-masked video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data in the masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area; and further configured to receive the non-masked video data that is sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data in the masked video data.
  • the playing unit is configured to merge and play the first masked video data and the non-masked video data that are received by the video data receiving unit 2032-2, or configured to play the non-masked video data received by the video data receiving unit 2032-2.
  • the playing unit is specifically configured to decode the first masked video data to obtain a masked video data frame, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in the masked video data frame, add, according to description information of the first masked area, the extracted pixel data to a pixel area in a non-masked video data frame that has a same timestamp as the masked video data frame so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
  • the playing unit is specifically configured to decode each channel of video data in the first masked video data to obtain a masked video data frame of the channel of video data, decode the non-masked video data to obtain a non-masked video data frame, extract pixel data in masked video data frames of all channels of video data, where the masked video data frames have a same timestamp, add the extracted pixel data to a pixel area in a non-masked video data frame that has the same timestamp as the masked video data frames so as to generate a complete video data frame, where the pixel area corresponds to the first masked area, and play the complete video data frame.
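The merge performed by the playing unit can be sketched as follows: pixel data decoded from a masked video data frame is written back, at the coordinates given by the description information of the first masked area, into the non-masked frame with the same timestamp. Frames are modeled here as 2-D lists of pixels; the `Frame` type and the `(top, left, height, width)` area tuple are assumptions for illustration, not the embodiment's actual frame format.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: int
    pixels: list  # 2-D list of pixel values, indexed [row][col]

def merge_frames(non_masked: Frame, masked: Frame, area) -> Frame:
    """area = (top, left, height, width) of the first masked area."""
    # Frames are matched by timestamp before merging.
    assert non_masked.timestamp == masked.timestamp
    top, left, h, w = area
    out = [row[:] for row in non_masked.pixels]  # copy the base frame
    for r in range(h):
        for c in range(w):
            # Restore each masked pixel to its original position in the
            # pixel area that corresponds to the first masked area.
            out[top + r][left + c] = masked.pixels[r][c]
    return Frame(non_masked.timestamp, out)
```

For multiple channels of masked video data, the same routine would simply be applied once per channel (each with its own area) onto the same timestamp-matched non-masked frame before playing the complete frame.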
  • each unit in the program 2032 refers to a corresponding unit in the third embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the sixth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a seventh embodiment of the present invention provides a peripheral unit 3000, including:
  • the processor 3010, the communications interface 3020, and the memory 3030 communicate with each other through the bus 3040.
  • the communications interface 3020 is configured to communicate with a network element, for example, communicate with a monitoring platform.
  • the processor 3010 is configured to execute a program 3032.
  • the program 3032 may include a program code, and the program code includes a computer operation instruction.
  • the processor 3010 is configured to execute the computer program stored in the memory and may specifically be a central processing unit (CPU), the core unit of a computer.
  • the memory 3030 is configured to store the program 3032.
  • the memory 3030 may include a high-speed RAM, and may further include a non-volatile memory, for example, at least one disk memory.
  • the program 3032 may specifically include a description information receiving unit 3032-1, a video data encoding unit 3032-2, and a video data sending unit 3032-3.
  • the description information receiving unit 3032-1 is configured to receive description information of a masked area, where the description information is sent by a monitoring platform;
  • the video data encoding unit 3032-2 is configured to encode, according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area.
  • the video data sending unit 3032-3 is configured to send the non-masked video data and the masked video data to the monitoring platform, so that the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, or sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire first masked video data, where the first masked video data corresponds to a first masked area, and the first masked area includes a part or all of the masked area.
  • the video data encoding unit 3032-2 is specifically configured to: when the masked area includes one area, encode the video picture that corresponds to the masked area in the captured video picture into one channel of video data according to the description information of the masked area; or, when the masked area includes multiple areas, encode the video pictures that correspond to the multiple areas into one channel of video data according to the description information of the masked area, or encode each of the video pictures that correspond to the multiple areas into a separate channel of video data, or encode the video pictures that correspond to areas with the same permission among the multiple areas into one channel of video data; and further specifically configured to encode the video picture that corresponds to the non-masked area in the captured video picture into the non-masked video data according to the description information of the masked area.
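The three channel-layout strategies for multiple masked areas can be sketched as a planning step that decides which areas share a channel of masked video data. The function name, the `(area_id, permission)` pairs, and the strategy labels are assumptions for illustration only.

```python
def plan_channels(areas, strategy):
    """areas: list of (area_id, permission) pairs.
    Returns one list of area ids per channel of masked video data."""
    if strategy == "single":
        # All masked areas encoded into one channel of video data.
        return [[area_id for area_id, _ in areas]]
    if strategy == "per_area":
        # Each masked area encoded into its own channel.
        return [[area_id] for area_id, _ in areas]
    if strategy == "per_permission":
        # Areas with the same permission share one channel.
        groups = {}
        for area_id, permission in areas:
            groups.setdefault(permission, []).append(area_id)
        return list(groups.values())
    raise ValueError(strategy)
```

The per-permission layout is what enables area-based permission control downstream: the platform can forward a whole channel to a terminal once the user's permission for that group is confirmed, without re-encoding.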
  • each unit in the program 3032 refers to a corresponding unit in the fourth embodiment of the present invention, and therefore no further details are provided herein.
  • a functional unit described in the seventh embodiment of the present invention can be used to implement the method described in the first embodiment.
  • an eighth embodiment of the present invention provides a video surveillance system 4000.
  • the video surveillance system includes a monitoring terminal 4010 and a monitoring platform 4020.
  • the monitoring terminal 4010 is specifically the monitoring terminal according to the third or the sixth embodiment.
  • the monitoring platform 4020 is specifically the monitoring platform according to the second or the fifth embodiment.
  • the video surveillance system may further include a peripheral unit 4030, which is specifically the peripheral unit according to the fourth or the seventh embodiment.
  • a functional unit described in the eighth embodiment of the present invention can be used to implement the method described in the first embodiment.
  • a monitoring platform determines permission of a user of the monitoring terminal, sends, according to a determined result, only non-masked video data to a monitoring terminal of a user that has no permission to acquire masked video data, and sends the masked video data and the non-masked video data to a monitoring terminal of a user that has permission to acquire a part or all of the masked video data, so that the monitoring terminal merges and plays the masked video data and the non-masked video data, or sends video data merged from the masked video data and the non-masked video data, thereby solving a security risk problem resulting from sending image data of a masked part to terminals of users with different permission in the prior art.
  • area-based permission control may be implemented, that is, if the masked area includes multiple areas, permission may be set for each different area, and masked video data that corresponds to a part or all of an area and that a user has permission to acquire is sent to a monitoring terminal of the user according to the permission of the user, thereby implementing more accurate permission control.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division; in actual implementation, other division manners are possible.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
  • When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be implemented in the form of a software product.
  • the computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiment of the present invention.
  • the foregoing storage medium includes: any medium that can store a program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.


Claims (13)

  1. Method for implementing video access, wherein a monitoring platform communicates with a first monitoring terminal through a transmission network, the method comprising the steps of:
    receiving, by the monitoring platform, a video request sent by the first monitoring terminal, where the video request includes a device identifier, and video data of a peripheral unit identified by the device identifier includes non-masked video data corresponding to a non-masked area and masked video data corresponding to a masked area whose different areas have respective permissions;
    determining, by the monitoring platform, whether a user of the first monitoring terminal has permission to acquire first masked video data in the masked video data, where the first masked video data corresponds to a first masked area having a respective permission, and the first masked area includes a part of the masked area; and
    if a result of the determination is yes, acquiring the first masked video data and the non-masked video data; and either sending description information of the first masked area to the first monitoring terminal and then sending the first masked video data and the non-masked video data to the first monitoring terminal, where the description information of the first masked area sent to the first monitoring terminal is used by the first monitoring terminal to merge the first masked video data and the non-masked video data, or merging the acquired first masked video data and the acquired non-masked video data according to the description information of the first masked area to obtain merged video data and sending the merged video data to the first monitoring terminal; where the description information of the first masked area comes from a masked area setting request received from a second monitoring terminal and includes a coordinate of the first masked area; and
    if a result of the determination is no, acquiring the non-masked video data and sending the non-masked video data to the first monitoring terminal; the method:
    before the receiving of a video request sent by a first monitoring terminal, comprising the steps of:
    receiving the masked area setting request sent by a second monitoring terminal, where the masked area setting request includes the device identifier of the peripheral unit and description information of the masked area; and
    sending the description information of the masked area to the peripheral unit, and receiving the non-masked video data and the masked video data that are sent by the peripheral unit and generated according to the description information of the masked area; or obtaining the masked video data and the non-masked video data by separating, according to the description information of the masked area, complete video data received from the peripheral unit.
  2. Method according to claim 1, the method:
    before the acquiring of the first masked video data and the non-masked video data, comprising the steps of:
    storing the masked video data into a masked video file, storing the non-masked video data into a non-masked video file, and establishing an association between the masked video file and the non-masked video file, where the masked video file includes one or more video files;
    the video request including view time;
    the acquiring of the non-masked video data specifically comprising the step of: acquiring video data corresponding to the view time from the non-masked video file; and
    the acquiring of the first masked video data specifically comprising the steps of: acquiring, according to the association, one or more video files that correspond to the first masked area and are associated with the non-masked video file, and acquiring video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  3. Method according to claim 2, wherein:
    the establishing of an association between the masked video file and the non-masked video file specifically comprises the steps of:
    recording a non-masked video index and a masked video index, where the non-masked video index includes the device identifier of the peripheral unit, the video start time and end time, indication information of the non-masked video data, and an identifier of the non-masked video file, and the masked video index includes indication information of the masked video data and an identifier of the masked video file; and establishing an association between the non-masked video index and the masked video index;
    the acquiring of the non-masked video data specifically comprises the steps of: obtaining, through matching, the non-masked video index according to the device identifier of the peripheral unit and the view time that are included in the video request and the indication information of the non-masked video data, the device identifier of the peripheral unit, and the video start time and end time that are included in the non-masked video index; acquiring the non-masked video file according to the identifier of the non-masked video file included in the non-masked video index; and acquiring the video data corresponding to the view time from the non-masked video file; and
    the acquiring of the first masked video data specifically comprises the steps of: acquiring, according to the association, the masked video index associated with the non-masked video index; acquiring, according to the identifier of the masked video file included in the masked video index, one or more video files corresponding to the first masked area; and acquiring the video data corresponding to the view time from the one or more video files corresponding to the first masked area.
  4. Method according to claim 1, wherein:
    the acquiring of the first masked video data and the non-masked video data, and the sending of the first masked video data and the non-masked video data to the first monitoring terminal, specifically comprise the steps of:
    generating an acquisition address of the non-masked video data and an acquisition address of the first masked video data, and sending the acquisition addresses to the first monitoring terminal, where the acquisition address of the first masked video data or a message carrying the acquisition address of the masked video data includes a data type used to indicate that the video data corresponding to the acquisition address is masked video data;
    receiving a request that is sent by the first monitoring terminal and includes the acquisition address of the non-masked video data; establishing, with the first monitoring terminal according to the acquisition address of the non-masked video data, a media channel used to send the non-masked video data; acquiring the non-masked video data according to the acquisition address of the non-masked video data; and sending the non-masked video data through the media channel; and
    receiving a request that is sent by the first monitoring terminal and includes the acquisition address of the first masked video data; establishing, with the first monitoring terminal according to the acquisition address of the first masked video data, a media channel used to send the first masked video data; acquiring the first masked video data according to the acquisition address of the first masked video data; and sending the first masked video data through the media channel.
  5. Method for implementing video access, wherein a peripheral unit communicates with a monitoring platform through a transmission network, the method comprising the steps of:
    receiving, by the peripheral unit, description information of a masked area whose different areas have respective permissions, where the description information is sent by the monitoring platform, and the description information of the masked area includes a coordinate of the masked area;
    encoding, by the peripheral unit according to the description information of the masked area, a captured video picture into non-masked video data corresponding to a non-masked area and masked video data corresponding to the masked area; and
    sending, by the peripheral unit, the non-masked video data and the masked video data to the monitoring platform, so that: the monitoring platform sends the non-masked video data and first masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked video data, and sends the non-masked video data to a monitoring terminal when the monitoring platform determines that a user of the monitoring terminal has no permission to acquire the first masked video data, where the first masked video data corresponds to a first masked area having a respective permission, and the first masked area includes a part of the masked area.
  6. Method according to claim 5, wherein:
    the encoding, according to the description information of the masked area, of a captured video picture into masked video data corresponding to the masked area specifically comprises the steps of:
    when the masked area includes only one area, encoding a video picture in the captured video picture into one channel of video data, where the video picture corresponds to the masked area; or
    when the masked area includes multiple areas, encoding video pictures in the captured video picture into one channel of video data, where the video pictures correspond to the multiple areas included in the masked area; or encoding each of the video pictures in the captured video picture into one channel of video data, where the video pictures correspond to the multiple areas included in the masked area; or encoding video pictures in the captured video picture into one channel of video data, where the video pictures correspond to areas with the same permission among the multiple areas included in the masked area.
  7. The method according to claim 5, wherein:
    the encoding, according to the description information of the masked area, of a captured video image into masked area video data corresponding to the masked area specifically comprises: directly encoding a video image of the captured video image into the masked area video data, the video image corresponding to the masked area; or encoding a video image of the captured video image after filling the video image by using a set pixel value, so as to generate the masked area video data, the video image corresponding to the unmasked area; and
    the encoding, according to the description information of the masked area, of a captured video image into unmasked area video data corresponding to the unmasked area specifically comprises: directly encoding a video image of the captured video image into the unmasked area video data, the video image corresponding to the unmasked area; or encoding a video image of the captured video image after filling the video image by using a set pixel value, so as to generate the unmasked area video data, the video image corresponding to the masked area.
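Claim 7's "fill the complementary area with a set pixel value before encoding" step can be sketched with plain nested lists standing in for a frame. The rectangle format and the fill value of 0 are illustrative assumptions:

```python
def split_frame(frame, mask_rect, fill=0):
    """Split one captured frame into a masked-area image and an
    unmasked-area image, as in claim 7's pre-encoding fill step.

    frame: 2D list of pixel values.
    mask_rect: (top, left, bottom, right) coordinates of the masked
               area; bottom/right are exclusive.
    Each output keeps the pixels of its own area and fills the
    complementary area with `fill`.
    """
    top, left, bottom, right = mask_rect

    def in_mask(r, c):
        return top <= r < bottom and left <= c < right

    masked = [[px if in_mask(r, c) else fill
               for c, px in enumerate(row)]
              for r, row in enumerate(frame)]
    unmasked = [[fill if in_mask(r, c) else px
                 for c, px in enumerate(row)]
                for r, row in enumerate(frame)]
    return masked, unmasked
```

Filling with a constant value keeps both images at the full frame size, so each can be fed to an ordinary video encoder without changing its geometry.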
  8. A monitoring platform, the monitoring platform communicating with a first monitoring terminal through a transmission network, the monitoring platform comprising: a video request receiving unit, a determining unit, an acquiring unit, and a video data sending unit, wherein:
    the video request receiving unit is configured to receive a video request sent by the first monitoring terminal, the video request containing a device identifier, and the video data of a peripheral unit identified by the device identifier containing unmasked area video data corresponding to an unmasked area and masked area video data corresponding to a masked area, different areas of which have respective permissions;
    the determining unit is configured to determine whether a user of the first monitoring terminal has permission to acquire first masked area video data in the masked area video data, the first masked area video data corresponding to a first masked area having a respective permission, and the first masked area comprising a part of the masked area;
    the acquiring unit is configured to acquire the unmasked area video data, and configured to acquire the first masked area video data when a result determined by the determining unit is yes; and
    the video data sending unit is configured to: when the result determined by the determining unit is yes, send, to the first monitoring terminal, the first masked area video data and the unmasked area video data that are acquired by the acquiring unit, and description information of the first masked area, the description information of the first masked area sent to the first monitoring terminal being used by the first monitoring terminal to merge and play the first masked area video data and the unmasked area video data, or merge, according to the description information of the first masked area, the first masked area video data and the unmasked area video data that are acquired by the acquiring unit to obtain merged video data, and send the merged video data to the first monitoring terminal; the description information of the first masked area coming from a masked area configuration request received from a second monitoring terminal, and containing a coordinate of the first masked area; and further configured to: when the result determined by the determining unit is no, send, to the first monitoring terminal, the unmasked area video data acquired by the acquiring unit;
    the monitoring platform further comprising: a configuration request receiving unit, a description information sending unit, and a first video data receiving unit, wherein the configuration request receiving unit is configured to receive a masked area configuration request sent by a second monitoring terminal, the masked area configuration request containing a device identifier of the peripheral unit and description information of the masked area; the description information sending unit is configured to send the description information of the masked area to the peripheral unit; and the first video data receiving unit is configured to receive the unmasked area video data and the masked area video data that are sent by the peripheral unit and generated according to the description information of the masked area; or
    the monitoring platform further comprising: a configuration request receiving unit, a second video data receiving unit, and a video data separating unit, wherein the configuration request receiving unit is configured to receive a masked area configuration request sent by a second monitoring terminal, the masked area configuration request containing a device identifier of the peripheral unit and description information of the masked area; the second video data receiving unit is configured to receive complete video data sent by the peripheral unit; and the video data separating unit is configured to obtain the masked area video data and the unmasked area video data by separating the complete video data received by the second video data receiving unit.
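The decision logic of the video data sending unit in claim 8 — masked plus unmasked data (with the description information) for a permitted user, unmasked data only otherwise — might be sketched as follows. The user/permission store and the response layout are hypothetical stand-ins, not structures defined by the patent:

```python
def respond_to_video_request(user_permissions, user, unmasked_data,
                             masked_data, masked_area_description):
    """Build the platform's response to a terminal's video request.

    user_permissions: maps user -> set of masked-area ids they may view.
    masked_data: maps masked-area id -> encoded masked-area video data.
    masked_area_description: maps masked-area id -> area coordinates,
        forwarded so the terminal can merge the streams for playback.
    """
    allowed = user_permissions.get(user, set())
    response = {"unmasked": unmasked_data}
    granted = {area: masked_data[area] for area in masked_data
               if area in allowed}
    if granted:
        # Permitted user: include the masked streams plus the
        # description information needed to merge them back in.
        response["masked"] = granted
        response["description"] = {area: masked_area_description[area]
                                   for area in granted}
    return response
```

The unmasked stream is always delivered; only the masked channels and their coordinates are permission-gated.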
  9. The monitoring platform according to claim 8, further comprising: a storage unit and an association establishing unit, wherein:
    the storage unit is configured to store the masked area video data in a masked video file and store the unmasked area video data in an unmasked video file, the masked video file comprising one or more video files;
    the association establishing unit is configured to establish an association between the masked video file and the unmasked video file;
    the video request receiving unit is specifically configured to receive a video request that comprises a viewing time and that is sent by the first monitoring terminal; and
    the acquiring unit is specifically configured to acquire video data corresponding to the viewing time from the unmasked video file, and further specifically configured to acquire, according to the association established by the association establishing unit, one or more video files that correspond to the first masked area and that are associated with the unmasked video file, and acquire video data corresponding to the viewing time from the one or more video files corresponding to the first masked area when the result determined by the determining unit is yes.
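The storage and retrieval flow of claim 9 — masked and unmasked video kept in separate files, linked by an association, then looked up by viewing time — can be sketched as below. The file records and the per-time index are illustrative simplifications of real video files:

```python
class VideoStore:
    """Minimal sketch of claim 9's storage and association units."""

    def __init__(self):
        self.files = {}          # file id -> {viewing time: video data}
        self.associations = {}   # unmasked file id -> masked file ids

    def associate(self, unmasked_id, masked_ids):
        # Establish the association between one unmasked video file and
        # the masked video file(s) derived from the same capture.
        self.associations[unmasked_id] = list(masked_ids)

    def acquire(self, unmasked_id, viewing_time, permitted_masked_ids):
        # Always fetch unmasked data for the viewing time; follow the
        # association to masked files only when the user is permitted.
        result = {unmasked_id: self.files[unmasked_id][viewing_time]}
        for masked_id in self.associations.get(unmasked_id, []):
            if masked_id in permitted_masked_ids:
                result[masked_id] = self.files[masked_id][viewing_time]
        return result
```

Keeping the masked data in its own files means a playback request from an unpermitted user never has to touch (or decrypt) the sensitive streams at all.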
  10. A peripheral unit, the peripheral unit communicating with a monitoring platform through a transmission network, the peripheral unit comprising: a description information receiving unit, a video data encoding unit, and a video data sending unit, wherein:
    the description information receiving unit is configured to receive description information of a masked area, different areas of which have respective permissions, the description information being sent by the monitoring platform and containing a coordinate of the masked area;
    the video data encoding unit is configured to encode, according to the description information of the masked area, a captured video image into unmasked area video data corresponding to an unmasked area and into masked area video data corresponding to the masked area; and
    the video data sending unit is configured to send the unmasked area video data and the masked area video data to the monitoring platform.
  11. The peripheral unit according to claim 10, wherein:
    the video data encoding unit is specifically configured to: encode, according to the description information of the masked area, the video images of the captured video image into one channel of video data, the video images corresponding to the multiple areas comprised in the masked area; or encode the video images of the captured video image each into one channel of video data, the video images corresponding to the multiple areas comprised in the masked area; or encode the video images of the captured video image into one channel of video data, the video images corresponding to the areas having the same permission among the multiple areas comprised in the masked area; and further specifically configured to encode, according to the description information of the masked area, a video image of the captured video image into the unmasked area video data, the video image corresponding to the unmasked area.
  12. A video monitoring system, comprising: a monitoring terminal and a monitoring platform, the monitoring platform communicating with the monitoring terminal through a transmission network;
    the monitoring terminal comprises a video request sending unit, a video data receiving unit, and a playing unit, wherein:
    the video request sending unit is configured to send a video request to the monitoring platform, the video request containing a device identifier, and the video data of a peripheral unit identified by the device identifier containing unmasked area video data corresponding to an unmasked area and masked area video data corresponding to a masked area, different areas of which have respective permissions;
    the video data receiving unit is configured to receive first masked area video data and the unmasked area video data that are sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal has permission to acquire the first masked area video data in the masked area video data, the first masked area video data corresponding to a first masked area having a respective permission, and the first masked area comprising a part or all of the masked area; and further configured to receive the unmasked area video data sent by the monitoring platform when the monitoring platform determines that a user of the monitoring terminal does not have permission to acquire the first masked area video data in the masked area video data; and
    the playing unit is configured to merge and play the first masked area video data and the unmasked area video data that are received by the video data receiving unit, or configured to play the unmasked area video data received by the video data receiving unit; and
    the monitoring platform is specifically the monitoring platform according to either of claims 8 and 9.
  13. The video monitoring system according to claim 12, further comprising a peripheral unit, wherein:
    the peripheral unit is specifically the peripheral unit according to claim 10 or 11.
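The playing unit's merge step in claim 12 — pasting the first masked-area video back into the unmasked-area frame at the coordinates carried in the description information — can be sketched with nested lists as frames. The rectangle format is the same illustrative assumption used above:

```python
def merge_frames(unmasked_frame, masked_patch, area_rect):
    """Paste a decoded masked-area patch into the unmasked frame.

    unmasked_frame: 2D list for the full picture, with the masked
        area blanked out by the encoder.
    masked_patch: 2D list covering only the masked area.
    area_rect: (top, left, bottom, right) from the masked-area
        description information; bottom/right are exclusive.
    """
    top, left, bottom, right = area_rect
    merged = [row[:] for row in unmasked_frame]  # copy; don't mutate input
    for r in range(top, bottom):
        for c in range(left, right):
            merged[r][c] = masked_patch[r - top][c - left]
    return merged
```

This is the terminal-side counterpart of the split in claim 7: given the same coordinates, the merge exactly inverts the fill, so a permitted user sees the original frame while an unpermitted user simply plays the unmasked stream as-is.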
EP12872312.9A 2012-10-11 2012-10-11 Method, apparatus and system for implementing video occlusion Active EP2741237B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/082784 WO2014056171A1 (fr) 2012-10-11 2012-10-11 Method, apparatus and system for implementing video occlusion

Publications (3)

Publication Number Publication Date
EP2741237A1 EP2741237A1 (fr) 2014-06-11
EP2741237A4 EP2741237A4 (fr) 2014-07-16
EP2741237B1 true EP2741237B1 (fr) 2017-08-09

Family

ID=50476881

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12872312.9A Active EP2741237B1 (fr) 2012-10-11 2012-10-11 Procédé, appareil et système de mise en oeuvre d'une occlusion de vidéo

Country Status (3)

Country Link
EP (1) EP2741237B1 (fr)
CN (1) CN103890783B (fr)
WO (1) WO2014056171A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111614930A (zh) * 2019-02-22 2020-09-01 浙江宇视科技有限公司 Video monitoring method, system, device and computer-readable storage medium

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105208340B (zh) * 2015-09-24 2019-10-18 浙江宇视科技有限公司 Display method and device for video data
CN107852523B (zh) * 2015-09-30 2021-01-19 苹果公司 Method, terminal and device for synchronizing media rendering between terminals
CN105866853B (zh) * 2016-04-13 2019-01-01 同方威视技术股份有限公司 Security inspection monitoring control system and security inspection monitoring terminal
CN106341664B (zh) * 2016-09-29 2019-12-13 浙江宇视科技有限公司 Data processing method and device
CN108206930A (zh) * 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 Method and device for displaying an image based on privacy masking
CN110324704A (zh) * 2018-03-29 2019-10-11 优酷网络技术(北京)有限公司 Video processing method and device
CN109063499B (zh) * 2018-07-27 2021-02-26 山东鲁能软件技术有限公司 Flexible and configurable electronic archive area authorization method and system
US11030212B2 (en) * 2018-09-06 2021-06-08 International Business Machines Corporation Redirecting query to view masked data via federation table
CN110958410A (zh) * 2018-09-27 2020-04-03 北京嘀嘀无限科技发展有限公司 Video processing method and device, and driving recorder
CN111541779B (zh) * 2020-07-07 2021-05-25 德能森智能科技(成都)有限公司 Cloud-platform-based smart residence system
CN112954458A (zh) * 2021-01-20 2021-06-11 浙江大华技术股份有限公司 Video occlusion method and device, electronic device, and storage medium
CN113014949B (zh) * 2021-03-10 2022-05-06 读书郎教育科技有限公司 Student privacy protection system and method for smart classroom course playback
WO2023092067A1 (fr) * 2021-11-18 2023-05-25 Parrot AI, Inc. Système et procédé de contrôle d'accès, de propriété de groupe et de rédaction d'enregistrements d'événements
CN114189660A (zh) * 2021-12-24 2022-03-15 威艾特科技(深圳)有限公司 Monitoring method and system based on an omnidirectional camera
CN114419720B (zh) * 2022-03-30 2022-10-18 浙江大华技术股份有限公司 Image occlusion method and system, and computer-readable storage medium
WO2024050347A1 (fr) * 2022-08-31 2024-03-07 SimpliSafe, Inc. Zones de dispositif de sécurité

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6509926B1 (en) * 2000-02-17 2003-01-21 Sensormatic Electronics Corporation Surveillance apparatus for camera surveillance system
FR2972886A1 (fr) * 2011-03-17 2012-09-21 Thales Sa Method for compressing/decompressing partially masked video streams, encoder and decoder implementing the method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006070249A1 (fr) * 2004-12-27 2006-07-06 Emitall Surveillance S.A. Brouillage efficace de regions etudiees dans une image ou video de maniere a preserver la vie privee
JP4671133B2 (ja) * 2007-02-09 2011-04-13 富士フイルム株式会社 Image processing device
CN101610396A (zh) * 2008-06-16 2009-12-23 北京智安邦科技有限公司 Intelligent video monitoring device module and system with privacy protection, and monitoring method thereof
US8576282B2 (en) * 2008-12-12 2013-11-05 Honeywell International Inc. Security system with operator-side privacy zones
CN101710979B (zh) * 2009-12-07 2015-03-04 北京中星微电子有限公司 Management method for a video monitoring system, and central management server
CN101848378A (zh) * 2010-06-07 2010-09-29 中兴通讯股份有限公司 Device, system and method for home video monitoring
CN102547212A (zh) * 2011-12-13 2012-07-04 浙江元亨通信技术股份有限公司 Splicing method for multi-channel video images



Also Published As

Publication number Publication date
CN103890783B (zh) 2017-02-22
CN103890783A (zh) 2014-06-25
EP2741237A1 (fr) 2014-06-11
EP2741237A4 (fr) 2014-07-16
WO2014056171A1 (fr) 2014-04-17

Similar Documents

Publication Publication Date Title
EP2741237B1 (fr) Method, apparatus and system for implementing video occlusion
US10594988B2 (en) Image capture apparatus, method for setting mask image, and recording medium
US11023618B2 (en) Systems and methods for detecting modifications in a video clip
EP3459252B1 (fr) Procédé et appareil pour la diffusion en continu en direct à débit binaire adaptatif amélioré spatialement pour une lecture vidéo à 360 degrés
JP4346548B2 (ja) Image data distribution system, and image data transmitting device and image data receiving device thereof
KR102320455B1 (ko) Method, device, and computer program for transmitting media content
KR102384489B1 (ko) Information processing apparatus, information providing apparatus, control method, and computer-readable storage medium
JP4877852B2 (ja) Image encoding device and image transmitting device
US20180176650A1 (en) Information processing apparatus and information processing method
US10757463B2 (en) Information processing apparatus and information processing method
KR102133207B1 (ko) Communication apparatus, communication control method, and communication system
US20230045876A1 (en) Video Playing Method, Apparatus, and System, and Computer Storage Medium
US10636115B2 (en) Information processing apparatus, method for controlling the same, and storage medium
CN114679608B (zh) VR video encrypted playback method, server, client, system, electronic device and medium
JP2012137900A (ja) Video output system, video output method, and server device
US20120281066A1 (en) Information processing device and information processing method
WO2018044731A1 (fr) Systèmes et procédés destinés à la fourniture de réseau hybride d'objets d'intérêt dans une vidéo
JP7218105B2 (ja) File generation device, file generation method, processing device, processing method, and program
CN105103543B (zh) Compatible depth-dependent coding method
WO2017030865A1 (fr) Procédé et systèmes d'affichage d'une partie d'un flux vidéo
KR20200000815A (ko) Transmitting device, transmitting method, receiving device, receiving method, and non-transitory computer-readable storage medium
WO2023153473A1 (fr) Dispositif de traitement multimédia, dispositif de transmission et dispositif de réception
WO2023153472A1 (fr) Dispositif de traitement multimédia, dispositif de transmission et dispositif de réception
KR20190068345A (ko) Image processing apparatus and control method thereof
EP4044584A1 (fr) Panoramic video generation method, video acquisition method, and related apparatuses

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20131003

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20140617

RIC1 Information provided on ipc code assigned before grant

Ipc: G08B 13/196 20060101ALI20140611BHEP

Ipc: G06K 9/60 20060101AFI20140611BHEP

17Q First examination report despatched

Effective date: 20150630

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170103

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170310

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 917570

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170815

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602012035854

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170809

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 917570

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171109

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171209

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171109

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171110

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602012035854

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20180511

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20171031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171031

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20121011

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170809

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602012035854

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06K0009600000

Ipc: G06V0030200000

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230831

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230911

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230830

Year of fee payment: 12