WO2018166224A1 - Target tracking display method and apparatus for panoramic video, and storage medium - Google Patents

Target tracking display method and apparatus for panoramic video, and storage medium

Info

Publication number
WO2018166224A1
Authority
WO
WIPO (PCT)
Prior art keywords
target tracking
tracking object
display screen
panoramic video
color component
Prior art date
Application number
PCT/CN2017/109937
Other languages
English (en)
French (fr)
Inventor
王云华
Original Assignee
Shenzhen TCL New Technology Co., Ltd. (深圳TCL新技术有限公司)
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co., Ltd. (深圳TCL新技术有限公司)
Publication of WO2018166224A1 publication Critical patent/WO2018166224A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Definitions

  • the present invention relates to the field of smart television technologies, and in particular, to a target tracking display method and apparatus for panoramic video, and a computer readable storage medium.
  • Panoramic video is a basic element of VR (Virtual Reality) video. It converts static panoramic images into dynamic video images, and users can freely watch the dynamic video within the shooting-angle range of the panoramic cameras, creating an immersive feeling.
  • the main purpose of the present invention is to provide a target tracking display method and device for panoramic video, and a computer readable storage medium, which aims to realize automatic tracking display of target objects in panoramic video, thereby simplifying user operations and improving the viewing experience of the user.
  • the present invention provides a target tracking display method for panoramic video, the method comprising the following steps:
  • the step of tracking the target tracking object in the panoramic video according to the generated identification information to keep the target tracking object displayed in a current display screen includes:
  • the viewing angle currently displayed by the display screen is adjusted according to a preset rule to keep the target tracking object displayed in the current display screen.
  • before the step of determining, according to the generated identification information, whether the target tracking object exists in a preset area of the current display screen edge, the method further includes:
  • the step of adjusting the viewing angle currently displayed by the display screen according to a preset rule to keep the target tracking object displayed in the current display screen includes:
  • the step of generating the identifier information of the target tracking object according to the preset rule includes:
  • the inverted color component values of each pixel are accumulated by color type, and the accumulated color component values are used as the identifier of the target tracking object.
  • the present invention further provides a target tracking display device for panoramic video, the device comprising:
  • a determining module configured to determine a target tracking object in the panoramic video according to the user's selection instruction
  • a generating module configured to generate identifier information of the target tracking object according to a preset rule
  • a tracking display module configured to track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in a current display screen.
  • the tracking display module includes:
  • a detecting unit configured to detect, according to the generated identification information, whether the target tracking object exists in a preset area of an edge of the current display screen
  • an adjusting unit configured to adjust the viewing angle currently displayed by the display screen according to a preset rule if the target tracking object exists in a preset area of the current display screen edge, so that the target tracking object remains displayed in the current display screen.
  • the tracking display module further includes:
  • a calculating unit configured to calculate a pixel space occupied by the target tracking object
  • a setting unit configured to set a detection area of the target tracking object at an edge of the display screen according to the calculated size of the pixel space.
  • the adjusting unit is further configured to:
  • the generating module includes:
  • An acquiring unit configured to acquire a color component value of each pixel in the target tracking object
  • a color inversion unit configured to perform color inversion on the target tracking object according to the obtained color component value, to obtain a color component value after each pixel is inverted
  • an accumulating unit configured to accumulate the inverted color component values of each pixel by color type, and use the accumulated color component values as the identifier of the target tracking object.
  • the present invention further provides a computer readable storage medium having a target tracking display program of panoramic video stored thereon, which, when executed by a processor, implements the following steps:
  • the following steps are further implemented:
  • the viewing angle currently displayed by the display screen is adjusted according to a preset rule to keep the target tracking object displayed in the current display screen.
  • the following steps are further implemented:
  • the following steps are further implemented:
  • the following steps are further implemented:
  • the inverted color component values of each pixel are accumulated by color type, and the accumulated color component values are used as the identifier of the target tracking object.
  • the invention determines the target tracking object in the panoramic video according to the user's selection instruction, generates the identification information of the target tracking object according to the preset rule, and tracks the target tracking object in the panoramic video according to the generated identification information, in order to keep the target tracking object displayed in the current display screen.
  • the display terminal can automatically track the target object according to the identification information of the target tracking object, and the user does not need to frequently adjust the display viewing angle of the display screen manually, thereby simplifying user operation and improving the user's viewing experience.
  • FIG. 1 is a schematic flow chart of a first embodiment of a target tracking display method for a panoramic video according to the present invention
  • FIG. 2 is a schematic flow chart of a second embodiment of a target tracking display method for panoramic video according to the present invention
  • FIG. 3 is a schematic flowchart diagram of a third embodiment of a target tracking display method for panoramic video according to the present invention.
  • FIG. 4 is a schematic diagram showing a display interface of a target tracking object in a preset area on an upper edge of the display screen of the present invention
  • FIG. 5 is a schematic diagram showing the refinement step of step S20 in FIG. 1;
  • FIG. 6 is a schematic diagram of functional blocks of an embodiment of a target tracking display device for panoramic video according to the present invention.
  • FIG. 7 is a schematic diagram of a refinement function module of the tracking display module of FIG. 6;
  • FIG. 8 is a schematic diagram of another refinement function module of the tracking display module of FIG. 6;
  • FIG. 9 is a schematic diagram of a refinement function module of the generation module in FIG. 6.
  • the invention provides a target tracking display method for panoramic video.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a target tracking display method for panoramic video according to the present invention. The method includes the following steps:
  • Step S10 determining a target tracking object in the panoramic video according to the user's selection instruction
  • the application scenario of this embodiment may be: the smart TV acquires the panoramic video resource through the network or other methods, and plays the obtained panoramic video resource by using the related playing program.
  • the panoramic video is initially displayed with a default perspective, and the panoramic video is played.
  • the user views and finds the moving object of interest, selects it as the target tracking object, and the smart TV automatically tracks the movement of the object so that the object is always displayed in the display.
  • the smart TV can automatically recognize objects in the panoramic video that may move, such as people, animals, and vehicles; the user then sends a selection instruction to the smart TV through a remote control, and after receiving the user's selection instruction, the smart TV determines an object in the playback screen as the target tracking object. In addition, when the smart TV supports touch screen operation, the user can directly send a selection instruction to the smart TV through a touch operation, and the smart TV can recognize the user's clicking, sliding, and zooming operations on the display screen to determine the target tracking object from the played panoramic video.
  • Step S20 Generate identification information of the target tracking object according to a preset rule.
  • After the target tracking object is selected, the smart TV generates the identification information of the target tracking object according to the preset rule; the identification information is used to uniquely identify the target tracking object and may include numbers, characters, and the name, type, and other attributes of the target tracking object.
  • For example, the smart TV can obtain an image of the target tracking object and use a hash algorithm (such as the MD5 algorithm) to calculate the hash value of the image as the identifier of the target tracking object. Alternatively, it can acquire the color component value of each pixel in the target tracking object, perform color inversion on the target tracking object according to the obtained color component values to obtain the inverted color component value of each pixel, accumulate the inverted values by color type, and use the accumulated color component values as the identifier of the target tracking object.
  • other algorithms can also be used to calculate the identifier of the target tracking object. In this way, when the target tracking object is subsequently tracked, it is possible to quickly determine whether the target tracking object exists in the panoramic video displayed on the current display screen by using the identification information.
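  • As a rough illustration of the hash-based option above, a minimal Python sketch (the function name and the idea of hashing the raw image bytes of the target region are illustrative assumptions, not from the patent):

```python
import hashlib

def image_hash_identifier(image_bytes):
    # Identify the target region by the MD5 hash of its image bytes;
    # identical bytes on a later frame yield the same identifier.
    return hashlib.md5(image_bytes).hexdigest()

region = b"raw pixel bytes of the selected target region"
print(image_hash_identifier(region))  # 32-character hex digest
```

  • Note that an exact hash only matches when the tracked region's pixels are bit-identical between frames, which is why the patent also offers the more tolerant color-accumulation identifier.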
  • Step S30 Track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in the current display screen.
  • After generating the identification information of the target tracking object, the smart TV tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in the current display screen. Specifically, the smart TV can detect, according to the identification information, whether the target tracking object has moved to the edge of the display screen and, if so, automatically adjust the viewing angle currently displayed by the display screen. For example, if the viewing angle of the panoramic video currently displayed by the smart TV is the shooting angle of panoramic camera 1 and the smart TV detects that the target tracking object has moved to the left border of the display screen, it automatically adjusts the current viewing angle of the display screen to the shooting angle of panoramic camera 2, which is rotated counterclockwise by a predetermined angle relative to panoramic camera 1.
  • the smart TV can also obtain the moving direction and moving distance of the target tracking object within a preset time period and adjust the viewing angle of the display screen according to that moving direction and distance, so that the target tracking object always remains displayed at the center of the display screen; flexible settings can be made in a specific implementation.
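  • One simple way to use the moving direction and distance, sketched in Python (the linear-prediction rule and names are illustrative assumptions, not the patent's specified method):

```python
def predicted_pan_target(prev_pos, cur_pos, dt, horizon):
    # Estimate the target's velocity from its movement over the last
    # dt seconds, then aim the view where the target will be after
    # `horizon` seconds so it stays near the screen center.
    vx = (cur_pos[0] - prev_pos[0]) / dt
    vy = (cur_pos[1] - prev_pos[1]) / dt
    return (cur_pos[0] + vx * horizon, cur_pos[1] + vy * horizon)

# A target that moved 60 px right over 1 s is aimed 30 px further right
# when looking 0.5 s ahead.
print(predicted_pan_target((100, 100), (160, 100), 1.0, 0.5))  # (190.0, 100.0)
```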
  • the target tracking object in the panoramic video is determined according to the user's selection instruction; the identification information of the target tracking object is generated according to a preset rule; and the target tracking object is tracked in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in the current display screen.
  • FIG. 2 is a schematic flowchart diagram of a second embodiment of a target tracking display method for panoramic video according to the present invention. Based on the embodiment shown in FIG. 1 above, step S30 may include:
  • Step S31 detecting, according to the generated identification information, whether the target tracking object exists in a preset area of a current display screen edge;
  • Step S32 If the target tracking object exists in the preset area of the current display screen edge, adjust the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed in the current display screen.
  • After generating the identification information of the target tracking object, the smart television detects, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen; if it does, the viewing angle currently displayed by the display screen is adjusted according to a preset rule, so that the target tracking object remains displayed in the current display screen. For example, when the smart TV detects that the target tracking object has moved into the preset area on the left edge of the display screen at a certain moment, the target tracking object is likely to continue moving to the left; at this time, the viewing angle currently displayed by the display screen should be adjusted correspondingly, in order to keep the target tracking object displayed in the current display screen.
  • The specific steps for the smart television to detect, according to the identification information, whether the target tracking object exists in the preset area at the edge of the current display screen may be as follows: at a certain moment, the smart television determines the target tracking object in the panoramic video and calculates its identification value according to the preset rule; at the next moment, the smart TV obtains all objects in the preset area at the edge of the display screen that may move, calculates their identification values according to the same rule, and matches the calculated identification values against the identification value of the target tracking object; if the matching succeeds, it can be determined that the target tracking object has moved into the preset area at the edge of the display screen.
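  • The matching procedure above can be sketched in Python as follows (a minimal sketch; the helper names are illustrative, the identifier rule follows the color-inversion scheme described later, and extraction of edge-area candidates is assumed to happen elsewhere):

```python
def compute_id(pixels):
    # Same rule used when the target was first selected: invert each
    # RGB component (255 - value) and accumulate per color channel.
    acc = [0, 0, 0]
    for px in pixels:
        for i in range(3):
            acc[i] += 255 - px[i]
    return tuple(acc)

def target_in_edge_area(target_id, edge_candidates):
    # edge_candidates: pixel lists of the moving objects found in the
    # preset area at the screen edge on the current frame.
    return any(compute_id(px) == target_id for px in edge_candidates)

target_id = compute_id([(100, 200, 100)])  # identifier stored at selection time
print(target_in_edge_area(target_id, [[(100, 200, 100)], [(0, 0, 0)]]))  # True
```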
  • step S32 may include:
  • Step S321 acquiring location information of the target tracking object in a preset area of the edge of the display screen
  • Step S322 adjusting the viewing angle currently displayed by the display screen according to the obtained location information, so that the target tracking object remains displayed in the current display screen.
  • When it is detected that the target tracking object exists in the preset area at the edge of the current display screen, the smart TV further acquires the position information of the target tracking object within that preset area and then adjusts the viewing angle currently displayed by the display screen according to the obtained position information, so that the target tracking object remains displayed in the current display screen. For example, if the target tracking object is detected in the preset area on the left edge of the display screen, the viewing angle currently displayed is moved to the left; if the target tracking object is detected in the preset area on the upper edge of the display screen, the viewing angle currently displayed is correspondingly moved upward.
  • the range of the angle of view movement or rotation may be set in advance, or may be determined according to the moving distance of the target tracking object.
  • By detecting whether the target tracking object exists in the preset area at the edge of the display screen, the moving condition of the target tracking object can be effectively determined, and adjusting the viewing angle currently displayed by the display screen accordingly ensures that the target tracking object remains displayed in the current display screen without disappearing from it, which enhances the user's viewing experience.
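  • The edge-to-pan mapping might be sketched as follows (a hypothetical Python helper; the fixed step size and sign convention are illustrative assumptions, not from the patent):

```python
def pan_step(edge, step=10):
    # Map the edge preset area where the target was detected to a
    # view-angle pan offset (dx, dy) in degrees; x grows rightward,
    # y grows upward.
    offsets = {"left": (-step, 0), "right": (step, 0),
               "top": (0, step), "bottom": (0, -step)}
    return offsets[edge]

# Target detected in the left-edge preset area: pan the view 10 degrees left.
print(pan_step("left"))  # (-10, 0)
```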
  • FIG. 3 is a schematic flowchart diagram of a third embodiment of a target tracking display method for panoramic video according to the present invention. Based on the embodiment shown in FIG. 2, before step S31, the method may further include:
  • Step S33 calculating a pixel space occupied by the target tracking object
  • Step S34 setting a detection area of the target tracking object on the edge of the display screen according to the calculated pixel space size.
  • the smart television may first calculate the pixel space occupied by the target tracking object. Then, according to the calculated pixel space size, the detection area of the target tracking object is set at the edge of the display screen.
  • FIG. 4 is a schematic diagram of a display interface of a target tracking object in a preset area on an upper edge of a display screen according to the present invention.
  • For example, if the target tracking object occupies a pixel space of 60 pixels in width and 80 pixels in height, the detection area set at each edge of the display screen should be no less than 60 pixels long in the horizontal direction and no less than 80 pixels high in the vertical direction.
  • the size of the detection area can be flexibly set according to actual needs.
  • the detection area of the target tracking object is set according to the pixel space occupied by the target tracking object, which can ensure the rationality of the detection area range setting, thereby reducing the calculation amount of the smart TV and improving the detection efficiency.
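  • Sizing the edge detection areas from the object's pixel footprint can be sketched as follows (a minimal Python illustration; the rectangle layout and names are assumptions, not the patent's exact scheme):

```python
def edge_detection_areas(screen_w, screen_h, obj_w, obj_h):
    # Edge bands at least as wide/tall as the target's pixel footprint,
    # returned as (x, y, width, height) rectangles.
    return {
        "left":   (0, 0, obj_w, screen_h),
        "right":  (screen_w - obj_w, 0, obj_w, screen_h),
        "top":    (0, 0, screen_w, obj_h),
        "bottom": (0, screen_h - obj_h, screen_w, obj_h),
    }

# A 60x80-pixel target on a 1920x1080 screen gives 60-px side bands
# and 80-px top/bottom bands.
areas = edge_detection_areas(1920, 1080, 60, 80)
print(areas["right"])  # (1860, 0, 60, 1080)
```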
  • step S20 may include:
  • Step S21 acquiring color component values of each pixel in the target tracking object
  • Step S22 performing color inversion on the target tracking object according to the obtained color component value, and obtaining a color component value after each pixel is inverted;
  • step S23 the color component values inverted by each pixel are accumulated according to the color type correspondence, and the accumulated color component values are used as the identifier of the target tracking object.
  • Each pixel that constitutes the target tracking object has its corresponding color, which is produced by mixing the three primary colors red, green, and blue; the color of each pixel can therefore be represented by three color components for red, green, and blue. For example, (100, 200, 100) may indicate that the red component of the pixel is 100, the green component is 200, and the blue component is 100.
  • The manner in which the smart television generates the identification information of the target tracking object according to the preset rule may be: after the user selects the target tracking object, the color component value of each pixel in the target tracking object is obtained; color inversion is then performed on the target tracking object according to the acquired color component values to obtain the inverted color component value of each pixel; the inverted values are then accumulated by color type, and the accumulated color component values are used as the identifier of the target tracking object.
  • Color inversion means subtracting each color component value from 255; for example, inverting a black pixel (0, 0, 0) yields a white pixel (255, 255, 255). If the color component values of the pixels in the target tracking object are (100, 200, 100), (100, 150, 100), ..., then after color inversion the component values of the pixels are (155, 55, 155), (155, 105, 155), .... Accumulating the inverted component values by color type gives (155+155+..., 55+105+..., 155+155+...); if the accumulated color component values are (a, b, c), then (a, b, c) is used as the identifier of the target tracking object.
  • During the movement of the target tracking object, the colors of its constituent pixels generally remain constant, so the color component values obtained by the above calculation can be used as the identifier of the target tracking object.
  • the target tracking object can be quickly distinguished from other moving objects by matching the color component values.
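  • The inversion-and-accumulation rule above, as a minimal Python sketch (the function name is illustrative):

```python
def inverted_color_identifier(pixels):
    # Invert each pixel's RGB components (255 - value) and accumulate
    # the inverted values per color channel to form the identifier.
    a = b = c = 0
    for r, g, bl in pixels:
        a += 255 - r
        b += 255 - g
        c += 255 - bl
    return (a, b, c)

# The two example pixels (100, 200, 100) and (100, 150, 100) invert to
# (155, 55, 155) and (155, 105, 155), accumulating to (310, 160, 310).
print(inverted_color_identifier([(100, 200, 100), (100, 150, 100)]))  # (310, 160, 310)
```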
  • the invention also provides a target tracking display device for panoramic video.
  • FIG. 6 is a schematic diagram of functional modules of an embodiment of a target tracking display device for panoramic video according to the present invention.
  • the device includes:
  • a determining module 10 configured to determine a target tracking object in the panoramic video according to the user's selection instruction
  • the application scenario of this embodiment may be: the smart TV acquires the panoramic video resource through the network or other methods, and plays the obtained panoramic video resource by using the related playing program.
  • the panoramic video is initially displayed with a default perspective, and the panoramic video is played.
  • the user views and finds the moving object of interest, selects it as the target tracking object, and the smart TV automatically tracks the movement of the object so that the object is always displayed in the display.
  • the smart TV can automatically recognize objects in the panoramic video that may move, such as people, animals, and vehicles; the user then sends a selection instruction to the smart TV through the remote control, and after receiving the user's selection instruction, the determining module 10 determines an object in the play screen as the target tracking object. In addition, when the smart TV supports touch screen operation, the user can directly send a selection instruction to the smart TV through a touch operation, and the determining module 10 can recognize the user's clicking, sliding, and zooming operations on the display screen to determine the target tracking object from the played panoramic video.
  • the generating module 20 is configured to generate identifier information of the target tracking object according to a preset rule
  • After the target tracking object is selected, the generating module 20 generates the identification information of the target tracking object according to the preset rule, where the identification information is used to uniquely identify the target tracking object and may include numbers, characters, and the name, type, and other attributes of the target tracking object. For example, the generating module 20 may obtain an image of the target tracking object and use a hash algorithm (such as the MD5 algorithm) to calculate the hash value of the image as the identifier of the target tracking object; alternatively, the generating module 20 may acquire the color component value of each pixel in the target tracking object, perform color inversion according to the obtained color component values to obtain the inverted color component value of each pixel, accumulate the inverted values by color type, and use the accumulated color component values as the identifier of the target tracking object.
  • other algorithms can also be used to calculate the identifier of the target tracking object. In this way, when the target tracking object is subsequently tracked, it is possible to quickly determine whether the target tracking object exists in the panoramic video displayed on the current display screen by using the identification information.
  • the tracking display module 30 is configured to track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in the current display screen.
  • After the identification information is generated, the tracking display module 30 tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed in the current display screen. Specifically, the tracking display module 30 can detect, according to the identification information, whether the target tracking object has moved to the edge of the display screen and, if so, automatically adjust the viewing angle currently displayed by the display screen. For example, if the viewing angle of the panoramic video currently displayed by the smart television is the shooting angle of panoramic camera 1 and the tracking display module 30 detects that the target tracking object has moved to the left border of the display screen, it automatically adjusts the current viewing angle of the display screen to the shooting angle of panoramic camera 2, which is rotated counterclockwise by a predetermined angle relative to panoramic camera 1; this ensures that the target tracking object is always displayed in the display screen without disappearing from it.
  • the tracking display module 30 can also acquire the moving direction and moving distance of the target tracking object within a preset time period and adjust the viewing angle of the display screen according to that moving direction and distance, so that the target tracking object always remains displayed at the center of the display screen; flexible settings can be made in a specific implementation.
  • the determining module 10 determines the target tracking object in the panoramic video according to the user's selection instruction; the generating module 20 generates the identification information of the target tracking object according to the preset rule; and the tracking display module 30 tracks the target tracking object in the panoramic video according to the generated identification information to keep the target tracking object displayed in the current display screen.
  • the display terminal can automatically track the target object according to the identification information of the target tracking object, and the user does not need to manually adjust the display viewing angle of the display screen frequently, thereby simplifying the user operation and improving the user. The viewing experience.
  • FIG. 7 is a schematic diagram of a refinement function module of the tracking display module of FIG. 6.
  • the tracking display module 30 can include:
  • the detecting unit 31 is configured to detect, according to the generated identification information, whether the target tracking object exists in a preset area of a current display screen edge;
  • the adjusting unit 32 is configured to adjust the viewing angle currently displayed by the display screen according to a preset rule if the target tracking object exists in a preset area of the current display screen edge, so that the target tracking object remains displayed in the current display screen.
  • After the identification information is generated, the detecting unit 31 detects, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen; if it does, the adjusting unit 32 adjusts the viewing angle currently displayed by the display screen according to the preset rule, so that the target tracking object remains displayed in the current display screen. For example, when the detecting unit 31 detects that the target tracking object has moved into the preset area on the left edge of the display screen at a certain moment, the target tracking object is likely to continue moving to the left; at this time, the adjusting unit 32 should correspondingly adjust the viewing angle currently displayed by the display screen so that the target tracking object remains displayed in the current display screen.
  • the detecting unit 31 detects whether the target tracking object exists in the preset area of the current display screen edge according to the identification information, and may determine that the target tracking object in the panoramic video is determined at a certain time and is calculated according to a preset rule.
  • the identification value the next time to obtain all the objects in the preset area of the display screen edge that may generate a moving action and calculate the identification value according to the same rule, and match the calculated identification value with the identification value of the target tracking object. If the matching is successful, it can be determined that the target tracking object has moved to the preset area at the edge of the display screen.
  • Further, the adjusting unit 32 is also configured to: acquire position information of the target tracking object in the preset area at the edge of the display screen; and adjust the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
  • When the detecting unit 31 detects that the target tracking object exists in the preset area at the edge of the current display screen, the adjusting unit 32 further acquires the position information of the target tracking object in that preset area, and then adjusts the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen. For example, if the target tracking object is found in the preset area at the left edge of the display screen, the currently displayed viewing angle is correspondingly moved to the left; if the target tracking object is found in the preset area at the upper edge of the display screen, the currently displayed viewing angle is correspondingly moved upward.
  • The range of viewing-angle movement or rotation may be set in advance, or may be determined according to the moving distance of the target tracking object.
  • In this embodiment, by detecting whether the target tracking object exists in the preset area at the edge of the display screen, the movement of the target tracking object can be effectively determined. When the target tracking object reaches the edge of the display screen, the viewing angle currently displayed by the display screen is adjusted, which ensures that the target tracking object remains displayed on the current display screen instead of disappearing from it, improving the user's viewing experience.
  • FIG. 8 is a schematic diagram of another refined functional module of the tracking display module of FIG. 6. Based on the embodiment shown in FIG. 7, the tracking display module 30 may further include:
  • a calculating unit 33, configured to calculate the pixel space occupied by the target tracking object;
  • a setting unit 34, configured to set a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
  • In this embodiment, the detection area of the target tracking object should be set neither too small nor too large: if it is set too small, the target tracking object may not be detected in the preset area; if it is set too large, the computational load on the smart TV increases.
  • To ensure that the detection area is set reasonably, the calculating unit 33 may first calculate the pixel space occupied by the target tracking object, and the setting unit 34 then sets the detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
  • FIG. 4 is a schematic diagram of a display interface of a target tracking object in a preset area on an upper edge of a display screen according to the present invention.
  • For example, the calculating unit 33 calculates that the pixel space occupied by the target tracking object is 60×80 (60 pixels long in the horizontal direction and 80 pixels high in the vertical direction). The setting unit 34 then sets the detection area for the target tracking object at each edge of the display screen to be no less than 60 pixels in the horizontal direction and no less than 80 pixels in the vertical direction; on this basis, the size of the detection area can be flexibly set according to actual needs.
  • In this embodiment, the detection area of the target tracking object is set according to the pixel space it occupies, which ensures that the detection area range is set reasonably, thereby reducing the computational load of the smart TV and improving detection efficiency.
  • FIG. 9 is a schematic diagram of a refinement function module of the generation module in FIG. 6.
  • The generating module 20 may include:
  • an obtaining unit 21, configured to acquire the color component value of each pixel in the target tracking object;
  • a color inversion unit 22, configured to perform color inversion on the target tracking object according to the obtained color component values, to obtain the inverted color component value of each pixel;
  • an accumulating unit 23, configured to accumulate the inverted color component values of the pixels by color type, and to use the accumulated color component values as the identifier of the target tracking object.
  • Each pixel that makes up the target tracking object has its corresponding color, and the color of a pixel is produced by mixing the three colors red, green, and blue. Therefore, the color of each pixel can be represented by three color components: red, green, and blue. For example, (100, 200, 100) may indicate that the red component of the pixel is 100, the green component is 200, and the blue component is 100.
  • The generating module 20 may generate the identification information of the target tracking object according to the preset rule as follows: after the user selects the target tracking object, the obtaining unit 21 acquires the color component value of each pixel in the target tracking object; the color inversion unit 22 performs color inversion on the target tracking object according to the obtained color component values to obtain the inverted color component value of each pixel; and the accumulating unit 23 accumulates the inverted color component values of the pixels by color type and uses the accumulated color component values as the identifier of the target tracking object.
  • For example, for a black pixel (0, 0, 0), color inversion means subtracting each color component value from 255, which yields a white pixel (255, 255, 255).
  • If the obtaining unit 21 obtains the color component values of the pixels in the target tracking object as (100, 200, 100), (100, 150, 100), ..., the color inversion unit 22 performs color inversion on the target tracking object to obtain the inverted color component value of each pixel, namely (155, 55, 155), (155, 105, 155), ...; the accumulating unit 23 then accumulates the inverted color component values of the pixels by color type, i.e., (155+155+…, 55+105+…, 155+155+…). If the accumulated color component values are (a, b, c), then (a, b, c) is used as the identifier of the target tracking object.
  • For some target tracking objects (such as a car), the colors of their constituent pixels are generally constant, so the color component values obtained by the above calculation can serve as the identifier of the target tracking object. When there are many moving objects on the display screen, the target tracking object can be quickly distinguished from other moving objects by matching color component values.
  • The present invention also provides a computer readable storage medium.
  • A target tracking display program for panoramic video is stored on the computer readable storage medium of the present invention. When the target tracking display program for panoramic video is executed by a processor, the steps of the target tracking display method described above are implemented, including:
  • adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen; and
  • accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses a target tracking display method for panoramic video, the method comprising: determining a target tracking object in a panoramic video according to a user's selection instruction; generating identification information of the target tracking object according to a preset rule; and tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. The present invention further discloses a target tracking display apparatus for panoramic video and a computer readable storage medium. The present invention enables automatic tracking and display of a target object in panoramic video, thereby simplifying user operation and improving the user's viewing experience.

Description

Target Tracking Display Method and Apparatus for Panoramic Video, and Storage Medium
Technical Field
The present invention relates to the technical field of smart TVs, and in particular to a target tracking display method and apparatus for panoramic video and a computer readable storage medium.
Background Art
In recent years, VR (Virtual Reality) technology has begun to be applied in smart TVs. Panoramic video is a basic element of VR video: it turns static panoramic pictures into dynamic video images, and the user can watch dynamic video at any angle within the shooting range of the panoramic camera, producing an immersive feeling.
At present, when a user watches panoramic video on a smart TV, the smart TV can display only part of the viewing angles of the panoramic video at any given moment. When the target object the user wants to watch moves from one viewing angle of the panoramic video to another, the target object disappears from the currently displayed viewing angle, and the user has to operate the remote control to adjust the viewing angle in order to track the target. The more frequently the target object moves, the more frequent the user's remote-control operations become; this is not only cumbersome but also affects the user's viewing experience.
Summary of the Invention
The main object of the present invention is to provide a target tracking display method and apparatus for panoramic video and a computer readable storage medium, aiming to achieve automatic tracking and display of a target object in panoramic video, thereby simplifying user operation and improving the user's viewing experience.
To achieve the above object, the present invention provides a target tracking display method for panoramic video, the method comprising the following steps:
determining a target tracking object in a panoramic video according to a user's selection instruction;
generating identification information of the target tracking object according to a preset rule;
tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
Optionally, the step of tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen, comprises:
detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
if so, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
Optionally, before the step of determining, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen, the method further comprises:
calculating the pixel space occupied by the target tracking object;
setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Optionally, the step of adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen, comprises:
acquiring position information of the target tracking object in the preset area at the edge of the display screen;
adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
Optionally, the step of generating identification information of the target tracking object according to a preset rule comprises:
acquiring the color component value of each pixel in the target tracking object;
performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
In addition, to achieve the above object, the present invention further provides a target tracking display apparatus for panoramic video, the apparatus comprising:
a determining module, configured to determine a target tracking object in a panoramic video according to a user's selection instruction;
a generating module, configured to generate identification information of the target tracking object according to a preset rule;
a tracking display module, configured to track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
Optionally, the tracking display module comprises:
a detecting unit, configured to detect, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
an adjusting unit, configured to adjust the viewing angle currently displayed by the display screen according to a preset rule if the target tracking object exists in the preset area at the edge of the current display screen, so that the target tracking object remains displayed on the current display screen.
Optionally, the tracking display module further comprises:
a calculating unit, configured to calculate the pixel space occupied by the target tracking object;
a setting unit, configured to set a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Optionally, the adjusting unit is further configured to:
acquire position information of the target tracking object in the preset area at the edge of the display screen;
adjust the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
Optionally, the generating module comprises:
an obtaining unit, configured to acquire the color component value of each pixel in the target tracking object;
a color inversion unit, configured to perform color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
an accumulating unit, configured to accumulate the inverted color component values of the pixels by color type, and to use the accumulated color component values as the identifier of the target tracking object.
In addition, to achieve the above object, the present invention further provides a computer readable storage medium storing a target tracking display program for panoramic video, and when the target tracking display program for panoramic video is executed by a processor, the following steps are implemented:
determining a target tracking object in a panoramic video according to a user's selection instruction;
generating identification information of the target tracking object according to a preset rule;
tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
Optionally, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
if so, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
Optionally, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
calculating the pixel space occupied by the target tracking object;
setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Optionally, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
acquiring position information of the target tracking object in the preset area at the edge of the display screen;
adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
Optionally, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
acquiring the color component value of each pixel in the target tracking object;
performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
The present invention determines a target tracking object in a panoramic video according to a user's selection instruction, generates identification information of the target tracking object according to a preset rule, and tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. In this way, when the target tracking object moves within the panoramic video, the display terminal can automatically track the target object according to its identification information, and the user does not need to frequently adjust the display viewing angle manually, which simplifies user operation and improves the user's viewing experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the target tracking display method for panoramic video according to the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the target tracking display method for panoramic video according to the present invention;
FIG. 3 is a schematic flowchart of a third embodiment of the target tracking display method for panoramic video according to the present invention;
FIG. 4 is a schematic diagram of a display interface in which a target tracking object exists in a preset area at the upper edge of the display screen according to the present invention;
FIG. 5 is a schematic diagram of the refined steps of step S20 in FIG. 1;
FIG. 6 is a schematic diagram of the functional modules of an embodiment of the target tracking display apparatus for panoramic video according to the present invention;
FIG. 7 is a schematic diagram of the refined functional modules of the tracking display module in FIG. 6;
FIG. 8 is a schematic diagram of other refined functional modules of the tracking display module in FIG. 6;
FIG. 9 is a schematic diagram of the refined functional modules of the generating module in FIG. 6.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with embodiments.
Detailed Description of the Embodiments
It should be understood that the specific embodiments described herein are intended only to explain the present invention and are not intended to limit it.
The present invention provides a target tracking display method for panoramic video.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the target tracking display method for panoramic video according to the present invention. The method comprises the following steps:
Step S10: determining a target tracking object in a panoramic video according to a user's selection instruction.
An application scenario of this embodiment may be as follows: a smart TV obtains a panoramic video resource over the network or by other means and plays it with a suitable player program. The panoramic video is initially displayed at a default viewing angle. While the panoramic video is playing, the user finds a moving object of interest and selects it as the target tracking object; the smart TV then automatically tracks the movement of that object so that it is always displayed on the display screen.
Specifically, the smart TV may automatically recognize, by their outlines, objects in the panoramic video that may move, such as people, animals, and vehicles. The user then sends a selection instruction to the smart TV via the remote control, and upon receiving the user's selection instruction the smart TV determines an object in the playing picture as the target tracking object. In addition, when the smart TV supports touch-screen operation, the user may also send a selection instruction directly through touch operations; the smart TV can determine the target tracking object from the playing panoramic video by recognizing the user's tap, slide, zoom, and other operations on the display screen.
Step S20: generating identification information of the target tracking object according to a preset rule.
After the target tracking object is selected, the smart TV generates identification information of the target tracking object according to a preset rule. This identification information uniquely identifies the target tracking object and may include digits, characters, and the name and type of the target tracking object. For example, the smart TV may obtain an image of the target tracking object and compute the hash value of that image with a hash algorithm (such as the MD5 algorithm) as the identifier of the target tracking object. As another example, it may acquire the color component value of each pixel in the target tracking object, perform color inversion on the target tracking object according to the acquired color component values to obtain the inverted color component value of each pixel, accumulate the inverted color component values by color type, and use the accumulated color component values as the identifier of the target tracking object. Of course, other algorithms may also be used to compute the identifier of the target tracking object. In this way, when subsequently tracking the target tracking object, the identification information makes it possible to quickly determine whether the target tracking object exists in the panoramic video currently shown on the display screen.
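One of the identifier options mentioned in step S20, hashing an image of the target object with the MD5 algorithm, can be sketched with Python's standard library. How the selected image region is serialized to bytes is an assumption here; any stable byte representation of the object would do:

```python
import hashlib

def md5_identifier(image_bytes: bytes) -> str:
    """Hash the bytes of the target object's image region; the hex
    digest serves as the unique identifier used for later matching."""
    return hashlib.md5(image_bytes).hexdigest()

# Any stable byte serialization of the selected region works as input.
tag = md5_identifier(b"\x64\xc8\x64")  # e.g. one (100, 200, 100) pixel
print(len(tag))  # → 32: an MD5 hex digest is always 32 characters
```

Because the digest is fixed-length, comparing identifiers later is a constant-time string comparison regardless of the object's size.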
Step S30: tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
After generating the identification information of the target tracking object, the smart TV tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. Specifically, the smart TV may detect, according to the identification information, whether the target tracking object has moved to the edge of the display screen; if so, it automatically adjusts the viewing angle currently displayed by the display screen. For example, suppose the viewing angle of the panoramic video currently displayed by the smart TV is the shooting angle of panoramic camera 1, and the smart TV detects that the target tracking object has moved to the left border of the display screen. It then automatically switches the currently displayed viewing angle to the shooting angle of panoramic camera 2, whose shooting angle is rotated counterclockwise by a predetermined angle relative to panoramic camera 1. This ensures that the target tracking object is always displayed on the display screen instead of disappearing from it. Of course, the smart TV may also obtain the moving direction and moving distance of the target tracking object within a preset period and adjust the viewing angle of the display screen accordingly, so that the target tracking object is always displayed at the center of the display screen; this can be set flexibly in a specific implementation.
In this embodiment, a target tracking object in a panoramic video is determined according to a user's selection instruction; identification information of the target tracking object is generated according to a preset rule; and the target tracking object is tracked in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. In this way, when the target tracking object moves within the panoramic video, the display terminal can automatically track the target object according to its identification information, and the user does not need to frequently adjust the display viewing angle manually, which simplifies user operation and improves the user's viewing experience.
Further, referring to FIG. 2, FIG. 2 is a schematic flowchart of a second embodiment of the target tracking display method for panoramic video according to the present invention. Based on the embodiment shown in FIG. 1, step S30 may comprise:
Step S31: detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
Step S32: if the target tracking object exists in the preset area at the edge of the current display screen, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
In this embodiment, after generating the identification information of the target tracking object, the smart TV detects, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen; if so, it adjusts the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen. For example, if the smart TV detects at some moment that the target tracking object has moved into the preset area at the left edge of the display screen, the target tracking object is likely to continue moving to the left, so the currently displayed viewing angle should be adjusted accordingly to keep the target tracking object displayed on the current display screen.
In the above steps, a specific way for the smart TV to detect, according to the identification information, whether the target tracking object exists in the preset area at the edge of the current display screen may be: at one moment, the smart TV determines the target tracking object in the panoramic video and calculates its identification value according to the preset rule; at the next moment, the smart TV obtains all objects in the preset area at the edge of the display screen that may move, calculates their identification values according to the same rule, and matches the calculated identification values against the identification value of the target tracking object. If the matching succeeds, it can be determined that the target tracking object has moved into the preset area at the edge of the display screen.
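The matching procedure described above (recompute the identification value of every moving object found in the edge region with the same rule, then compare it with the stored value) might look like the following sketch; the candidate list and the `make_id` helper are illustrative stand-ins, not names from the patent:

```python
def target_in_edge_region(candidates, target_id, make_id):
    """candidates: one pixel list per moving object detected in the
    preset edge region; make_id: the same identification rule used
    when the target was selected. Returns True on a successful match."""
    return any(make_id(obj) == target_id for obj in candidates)

# Identification rule from the patent: per-channel sum of inverted pixels.
make_id = lambda pixels: tuple(sum(255 - c for c in chan) for chan in zip(*pixels))

target_id = make_id([(100, 200, 100), (100, 150, 100)])
candidates = [[(10, 10, 10)], [(100, 200, 100), (100, 150, 100)]]  # 2nd matches
print(target_in_edge_region(candidates, target_id, make_id))  # → True
```

Any identification rule can be plugged in through `make_id`, as long as it is the same rule that produced the stored `target_id`.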
Further, step S32 may comprise:
Step S321: acquiring position information of the target tracking object in the preset area at the edge of the display screen;
Step S322: adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
When it is detected that the target tracking object exists in the preset area at the edge of the current display screen, the smart TV further acquires the position information of the target tracking object in that preset area, and then adjusts the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen. For example, if the target tracking object is found in the preset area at the left edge of the display screen, the currently displayed viewing angle is correspondingly moved to the left; if the target tracking object is found in the preset area at the upper edge of the display screen, the currently displayed viewing angle is correspondingly moved upward. The range of viewing-angle movement or rotation may be set in advance, or may be determined according to the moving distance of the target tracking object.
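The position-based adjustment reduces to mapping the edge region that contains the target to a pan or tilt direction. Below is a minimal sketch; the step size is an assumed parameter, since the text says the movement range may be preset or derived from the target's moving distance:

```python
def adjust_view_angle(pan, tilt, edge, step=10.0):
    """Shift the currently displayed viewing angle toward the edge
    region where the target was found, so the target stays on screen."""
    moves = {
        "left":   (-step, 0.0),   # target at left edge: pan left
        "right":  (step, 0.0),
        "top":    (0.0, step),    # target at upper edge: tilt up
        "bottom": (0.0, -step),
    }
    dp, dt = moves[edge]
    return pan + dp, tilt + dt

print(adjust_view_angle(0.0, 0.0, "left"))  # → (-10.0, 0.0)
```

Passing the target's moving distance in place of the fixed `step` would give the distance-dependent variant mentioned in the text.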
In this embodiment, by detecting whether the target tracking object exists in the preset area at the edge of the display screen, the movement of the target tracking object can be effectively determined. When the target tracking object is at the edge of the display screen, the viewing angle currently displayed by the display screen is adjusted, which ensures that the target tracking object remains displayed on the current display screen instead of disappearing from it, improving the user's viewing experience.
Further, referring to FIG. 3, FIG. 3 is a schematic flowchart of a third embodiment of the target tracking display method for panoramic video according to the present invention. Based on the embodiment shown in FIG. 2, before step S31 the method may further comprise:
Step S33: calculating the pixel space occupied by the target tracking object;
Step S34: setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
In this embodiment, the detection area of the target tracking object should be set neither too small nor too large: if it is set too small, the target tracking object may not be detected in the preset area; if it is set too large, the computational load on the smart TV increases. To ensure that the detection area is set reasonably, before detecting according to the generated identification information whether the target tracking object exists in the preset area at the edge of the current display screen, the smart TV may first calculate the pixel space occupied by the target tracking object, and then set the detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a display interface in which a target tracking object exists in a preset area at the upper edge of the display screen according to the present invention. For example, if the smart TV calculates that the pixel space occupied by the target tracking object is 60×80 (60 pixels long in the horizontal direction and 80 pixels high in the vertical direction), the detection area for the target tracking object at each edge of the display screen is set to be no less than 60 pixels in the horizontal direction and no less than 80 pixels in the vertical direction; on this basis, the size of the detection area can be flexibly set according to actual needs.
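Assuming the target's bounding box in pixels is known, the sizing rule above (each edge detection strip at least as deep as the target's footprint) might be sketched as follows; the `margin` parameter stands in for the "flexibly set" allowance:

```python
def edge_detection_regions(screen_w, screen_h, obj_w, obj_h, margin=0):
    """Return the four edge detection strips as (x, y, w, h) rectangles.
    Each strip is at least as deep as the target's pixel footprint,
    matching the 60x80 example in the text."""
    dx = obj_w + margin  # depth of the left/right strips
    dy = obj_h + margin  # depth of the top/bottom strips
    return {
        "left":   (0, 0, dx, screen_h),
        "right":  (screen_w - dx, 0, dx, screen_h),
        "top":    (0, 0, screen_w, dy),
        "bottom": (0, screen_h - dy, screen_w, dy),
    }

print(edge_detection_regions(1920, 1080, 60, 80)["left"])  # → (0, 0, 60, 1080)
```

Tying the strip depth to the object's footprint keeps the strips as small as reliable detection allows, which is exactly the trade-off the text describes.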
In this embodiment, the detection area of the target tracking object is set according to the pixel space it occupies, which ensures that the detection area range is set reasonably, thereby reducing the computational load of the smart TV and improving detection efficiency.
Further, referring to FIG. 5, FIG. 5 is a schematic diagram of the refined steps of step S20 in FIG. 1. Based on the above embodiments, step S20 may comprise:
Step S21: acquiring the color component value of each pixel in the target tracking object;
Step S22: performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
Step S23: accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
Each pixel that makes up the target tracking object has its corresponding color, and the color of a pixel is produced by mixing the three colors red, green, and blue. Therefore, the color of each pixel can be represented by three color components: red, green, and blue. For example, (100, 200, 100) may indicate that the red component of the pixel is 100, the green component is 200, and the blue component is 100.
In this embodiment, the smart TV may generate the identification information of the target tracking object according to the preset rule as follows: after the user selects the target tracking object, the smart TV acquires the color component value of each pixel in the target tracking object, performs color inversion on the target tracking object according to the acquired color component values to obtain the inverted color component value of each pixel, accumulates the inverted color component values of the pixels by color type, and uses the accumulated color component values as the identifier of the target tracking object.
For example, for a black pixel (0, 0, 0), color inversion means subtracting each color component value from 255, which yields a white pixel (255, 255, 255). If the smart TV acquires the color component values of the pixels in the target tracking object as (100, 200, 100), (100, 150, 100), ..., performing color inversion on the target tracking object yields the inverted color component value of each pixel, namely (155, 55, 155), (155, 105, 155), ...; accumulating the inverted color component values by color type gives (155+155+…, 55+105+…, 155+155+…). If the accumulated color component values are (a, b, c), then (a, b, c) is used as the identifier of the target tracking object.
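The worked example above can be reproduced in a few lines of Python: invert each RGB component against 255, then accumulate the inverted values per color channel. The pixel values are the ones from the text:

```python
def color_inversion_id(pixels):
    """Identifier rule from steps S21-S23: invert each RGB component
    (255 - value), then accumulate the inverted values per channel."""
    acc = [0, 0, 0]
    for r, g, b in pixels:
        inverted = (255 - r, 255 - g, 255 - b)  # color inversion of one pixel
        for i, v in enumerate(inverted):
            acc[i] += v
    return tuple(acc)

# (100, 200, 100) and (100, 150, 100) invert to (155, 55, 155) and
# (155, 105, 155); summing per channel gives the identifier (a, b, c).
print(color_inversion_id([(100, 200, 100), (100, 150, 100)]))  # → (310, 160, 310)
```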
For some target tracking objects (such as a car), the colors of their constituent pixels are generally constant, so the color component values obtained by the above calculation can serve as the identifier of the target tracking object. When there are many moving objects on the display screen, the target tracking object can be quickly distinguished from other moving objects by matching color component values.
The present invention further provides a target tracking display apparatus for panoramic video.
Referring to FIG. 6, FIG. 6 is a schematic diagram of the functional modules of an embodiment of the target tracking display apparatus for panoramic video according to the present invention. The apparatus comprises:
a determining module 10, configured to determine a target tracking object in a panoramic video according to a user's selection instruction.
An application scenario of this embodiment may be as follows: a smart TV obtains a panoramic video resource over the network or by other means and plays it with a suitable player program. The panoramic video is initially displayed at a default viewing angle. While the panoramic video is playing, the user finds a moving object of interest and selects it as the target tracking object; the smart TV then automatically tracks the movement of that object so that it is always displayed on the display screen.
Specifically, the smart TV may automatically recognize, by their outlines, objects in the panoramic video that may move, such as people, animals, and vehicles. The user then sends a selection instruction to the smart TV via the remote control, and upon receiving the user's selection instruction the determining module 10 determines an object in the playing picture as the target tracking object. In addition, when the smart TV supports touch-screen operation, the user may also send a selection instruction directly through touch operations; the determining module 10 can determine the target tracking object from the playing panoramic video by recognizing the user's tap, slide, zoom, and other operations on the display screen.
a generating module 20, configured to generate identification information of the target tracking object according to a preset rule.
After the target tracking object is selected, the generating module 20 generates identification information of the target tracking object according to a preset rule. This identification information uniquely identifies the target tracking object and may include digits, characters, and the name and type of the target tracking object. For example, the generating module 20 may obtain an image of the target tracking object and compute the hash value of that image with a hash algorithm (such as the MD5 algorithm) as the identifier of the target tracking object. As another example, the generating module 20 may acquire the color component value of each pixel in the target tracking object, perform color inversion on the target tracking object according to the acquired color component values to obtain the inverted color component value of each pixel, accumulate the inverted color component values by color type, and use the accumulated color component values as the identifier of the target tracking object. Of course, other algorithms may also be used to compute the identifier of the target tracking object. In this way, when subsequently tracking the target tracking object, the identification information makes it possible to quickly determine whether the target tracking object exists in the panoramic video currently shown on the display screen.
a tracking display module 30, configured to track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
After the generating module 20 generates the identification information of the target tracking object, the tracking display module 30 tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. Specifically, the tracking display module 30 may detect, according to the identification information, whether the target tracking object has moved to the edge of the display screen; if so, it automatically adjusts the viewing angle currently displayed by the display screen. For example, suppose the viewing angle of the panoramic video currently displayed by the smart TV is the shooting angle of panoramic camera 1, and the tracking display module 30 detects that the target tracking object has moved to the left border of the display screen. It then automatically switches the currently displayed viewing angle to the shooting angle of panoramic camera 2, whose shooting angle is rotated counterclockwise by a predetermined angle relative to panoramic camera 1. This ensures that the target tracking object is always displayed on the display screen instead of disappearing from it. Of course, the tracking display module 30 may also obtain the moving direction and moving distance of the target tracking object within a preset period and adjust the viewing angle of the display screen accordingly, so that the target tracking object is always displayed at the center of the display screen; this can be set flexibly in a specific implementation.
In this embodiment, the determining module 10 determines a target tracking object in a panoramic video according to a user's selection instruction; the generating module 20 generates identification information of the target tracking object according to a preset rule; and the tracking display module 30 tracks the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen. In this way, when the target tracking object moves within the panoramic video, the display terminal can automatically track the target object according to its identification information, and the user does not need to frequently adjust the display viewing angle manually, which simplifies user operation and improves the user's viewing experience.
Further, referring to FIG. 7, FIG. 7 is a schematic diagram of the refined functional modules of the tracking display module in FIG. 6. Based on the embodiment shown in FIG. 6, the tracking display module 30 may comprise:
a detecting unit 31, configured to detect, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
an adjusting unit 32, configured to adjust the viewing angle currently displayed by the display screen according to a preset rule if the target tracking object exists in the preset area at the edge of the current display screen, so that the target tracking object remains displayed on the current display screen.
In this embodiment, after the generating module 20 generates the identification information of the target tracking object, the detecting unit 31 detects, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen; if so, the adjusting unit 32 adjusts the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen. For example, if the detecting unit 31 detects at some moment that the target tracking object has moved into the preset area at the left edge of the display screen, the target tracking object is likely to continue moving to the left, so the adjusting unit 32 should correspondingly adjust the currently displayed viewing angle to keep the target tracking object displayed on the current display screen.
In the above steps, a specific way for the detecting unit 31 to detect, according to the identification information, whether the target tracking object exists in the preset area at the edge of the current display screen may be: at one moment, determine the target tracking object in the panoramic video and calculate its identification value according to the preset rule; at the next moment, obtain all objects in the preset area at the edge of the display screen that may move, calculate their identification values according to the same rule, and match the calculated identification values against the identification value of the target tracking object. If the matching succeeds, it can be determined that the target tracking object has moved into the preset area at the edge of the display screen.
Further, the adjusting unit 32 is also configured to: acquire position information of the target tracking object in the preset area at the edge of the display screen; and adjust the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
When the detecting unit 31 detects that the target tracking object exists in the preset area at the edge of the current display screen, the adjusting unit 32 further acquires the position information of the target tracking object in that preset area, and then adjusts the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen. For example, if the target tracking object is found in the preset area at the left edge of the display screen, the currently displayed viewing angle is correspondingly moved to the left; if the target tracking object is found in the preset area at the upper edge of the display screen, the currently displayed viewing angle is correspondingly moved upward. The range of viewing-angle movement or rotation may be set in advance, or may be determined according to the moving distance of the target tracking object.
In this embodiment, by detecting whether the target tracking object exists in the preset area at the edge of the display screen, the movement of the target tracking object can be effectively determined. When the target tracking object is at the edge of the display screen, the viewing angle currently displayed by the display screen is adjusted, which ensures that the target tracking object remains displayed on the current display screen instead of disappearing from it, improving the user's viewing experience.
Further, referring to FIG. 8, FIG. 8 is a schematic diagram of other refined functional modules of the tracking display module in FIG. 6. Based on the embodiment shown in FIG. 7, the tracking display module 30 may further comprise:
a calculating unit 33, configured to calculate the pixel space occupied by the target tracking object;
a setting unit 34, configured to set a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
In this embodiment, the detection area of the target tracking object should be set neither too small nor too large: if it is set too small, the target tracking object may not be detected in the preset area; if it is set too large, the computational load on the smart TV increases. To ensure that the detection area is set reasonably, before the detecting unit 31 detects according to the generated identification information whether the target tracking object exists in the preset area at the edge of the current display screen, the calculating unit 33 may first calculate the pixel space occupied by the target tracking object, and the setting unit 34 then sets the detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a display interface in which a target tracking object exists in a preset area at the upper edge of the display screen according to the present invention. For example, if the calculating unit 33 calculates that the pixel space occupied by the target tracking object is 60×80 (60 pixels long in the horizontal direction and 80 pixels high in the vertical direction), the setting unit 34 sets the detection area for the target tracking object at each edge of the display screen to be no less than 60 pixels in the horizontal direction and no less than 80 pixels in the vertical direction; on this basis, the size of the detection area can be flexibly set according to actual needs.
In this embodiment, the detection area of the target tracking object is set according to the pixel space it occupies, which ensures that the detection area range is set reasonably, thereby reducing the computational load of the smart TV and improving detection efficiency.
Further, referring to FIG. 9, FIG. 9 is a schematic diagram of the refined functional modules of the generating module in FIG. 6. Based on the above embodiments, the generating module 20 may comprise:
an obtaining unit 21, configured to acquire the color component value of each pixel in the target tracking object;
a color inversion unit 22, configured to perform color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
an accumulating unit 23, configured to accumulate the inverted color component values of the pixels by color type, and to use the accumulated color component values as the identifier of the target tracking object.
Each pixel that makes up the target tracking object has its corresponding color, and the color of a pixel is produced by mixing the three colors red, green, and blue. Therefore, the color of each pixel can be represented by three color components: red, green, and blue. For example, (100, 200, 100) may indicate that the red component of the pixel is 100, the green component is 200, and the blue component is 100.
In this embodiment, the generating module 20 may generate the identification information of the target tracking object according to the preset rule as follows: after the user selects the target tracking object, the obtaining unit 21 acquires the color component value of each pixel in the target tracking object; the color inversion unit 22 performs color inversion on the target tracking object according to the acquired color component values to obtain the inverted color component value of each pixel; and the accumulating unit 23 accumulates the inverted color component values of the pixels by color type and uses the accumulated color component values as the identifier of the target tracking object.
For example, for a black pixel (0, 0, 0), color inversion means subtracting each color component value from 255, which yields a white pixel (255, 255, 255). If the obtaining unit 21 acquires the color component values of the pixels in the target tracking object as (100, 200, 100), (100, 150, 100), ..., the color inversion unit 22 performs color inversion on the target tracking object to obtain the inverted color component value of each pixel, namely (155, 55, 155), (155, 105, 155), ...; the accumulating unit 23 accumulates the inverted color component values of the pixels by color type, i.e., (155+155+…, 55+105+…, 155+155+…). If the accumulated color component values are (a, b, c), then (a, b, c) is used as the identifier of the target tracking object.
For some target tracking objects (such as a car), the colors of their constituent pixels are generally constant, so the color component values obtained by the above calculation can serve as the identifier of the target tracking object. When there are many moving objects on the display screen, the target tracking object can be quickly distinguished from other moving objects by matching color component values.
The present invention further provides a computer readable storage medium.
A target tracking display program for panoramic video is stored on the computer readable storage medium of the present invention, and when the target tracking display program for panoramic video is executed by a processor, the following steps are implemented:
determining a target tracking object in a panoramic video according to a user's selection instruction;
generating identification information of the target tracking object according to a preset rule;
tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
Further, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
if so, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
Further, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
calculating the pixel space occupied by the target tracking object;
setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
Further, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
acquiring position information of the target tracking object in the preset area at the edge of the display screen;
adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
Further, when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
acquiring the color component value of each pixel in the target tracking object;
performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
For the method implemented when the target tracking display program for panoramic video running on the processor is executed, reference may be made to the embodiments of the target tracking display method for panoramic video of the present invention, which will not be repeated here.
The above are only preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise be included within the patent protection scope of the present invention.

Claims (15)

  1. A target tracking display method for panoramic video, characterized in that the method comprises the following steps:
    determining a target tracking object in a panoramic video according to a user's selection instruction;
    generating identification information of the target tracking object according to a preset rule;
    tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
  2. The target tracking display method for panoramic video according to claim 1, characterized in that the step of tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen, comprises:
    detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
    if so, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
  3. The target tracking display method for panoramic video according to claim 2, characterized in that before the step of determining, according to the generated identification information, whether the target tracking object exists in the preset area at the edge of the current display screen, the method further comprises:
    calculating the pixel space occupied by the target tracking object;
    setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
  4. The target tracking display method for panoramic video according to claim 2, characterized in that the step of adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen, comprises:
    acquiring position information of the target tracking object in the preset area at the edge of the display screen;
    adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
  5. The target tracking display method for panoramic video according to claim 1, characterized in that the step of generating identification information of the target tracking object according to a preset rule comprises:
    acquiring the color component value of each pixel in the target tracking object;
    performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
    accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
  6. A target tracking display apparatus for panoramic video, characterized in that the apparatus comprises:
    a determining module, configured to determine a target tracking object in a panoramic video according to a user's selection instruction;
    a generating module, configured to generate identification information of the target tracking object according to a preset rule;
    a tracking display module, configured to track the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
  7. The target tracking display apparatus for panoramic video according to claim 6, characterized in that the tracking display module comprises:
    a detecting unit, configured to detect, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
    an adjusting unit, configured to adjust the viewing angle currently displayed by the display screen according to a preset rule if the target tracking object exists in the preset area at the edge of the current display screen, so that the target tracking object remains displayed on the current display screen.
  8. The target tracking display apparatus for panoramic video according to claim 7, characterized in that the tracking display module further comprises:
    a calculating unit, configured to calculate the pixel space occupied by the target tracking object;
    a setting unit, configured to set a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
  9. The target tracking display apparatus for panoramic video according to claim 7, characterized in that the adjusting unit is further configured to:
    acquire position information of the target tracking object in the preset area at the edge of the display screen;
    adjust the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
  10. The target tracking display apparatus for panoramic video according to claim 6, characterized in that the generating module comprises:
    an obtaining unit, configured to acquire the color component value of each pixel in the target tracking object;
    a color inversion unit, configured to perform color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
    an accumulating unit, configured to accumulate the inverted color component values of the pixels by color type, and to use the accumulated color component values as the identifier of the target tracking object.
  11. A computer readable storage medium, characterized in that a target tracking display program for panoramic video is stored on the computer readable storage medium, and when the target tracking display program for panoramic video is executed by a processor, the following steps are implemented:
    determining a target tracking object in a panoramic video according to a user's selection instruction;
    generating identification information of the target tracking object according to a preset rule;
    tracking the target tracking object in the panoramic video according to the generated identification information, so that the target tracking object remains displayed on the current display screen.
  12. The computer readable storage medium according to claim 11, characterized in that when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
    detecting, according to the generated identification information, whether the target tracking object exists in a preset area at the edge of the current display screen;
    if so, adjusting the viewing angle currently displayed by the display screen according to a preset rule, so that the target tracking object remains displayed on the current display screen.
  13. The computer readable storage medium according to claim 12, characterized in that when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
    calculating the pixel space occupied by the target tracking object;
    setting a detection area for the target tracking object at the edge of the display screen according to the calculated pixel space size.
  14. The computer readable storage medium according to claim 12, characterized in that when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
    acquiring position information of the target tracking object in the preset area at the edge of the display screen;
    adjusting the viewing angle currently displayed by the display screen according to the acquired position information, so that the target tracking object remains displayed on the current display screen.
  15. The computer readable storage medium according to claim 11, characterized in that when the target tracking display program for panoramic video is executed by the processor, the following steps are also implemented:
    acquiring the color component value of each pixel in the target tracking object;
    performing color inversion on the target tracking object according to the acquired color component values, to obtain the inverted color component value of each pixel;
    accumulating the inverted color component values of the pixels by color type, and using the accumulated color component values as the identifier of the target tracking object.
PCT/CN2017/109937 2017-03-14 2017-11-08 Target tracking display method and apparatus for panoramic video, and storage medium WO2018166224A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710153172.6 2017-03-14
CN201710153172.6A CN106961597B (zh) 2017-03-14 2017-03-14 Target tracking display method and apparatus for panoramic video

Publications (1)

Publication Number Publication Date
WO2018166224A1 true WO2018166224A1 (zh) 2018-09-20

Family

ID=59470840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109937 WO2018166224A1 (zh) 2017-03-14 2017-11-08 Target tracking display method and apparatus for panoramic video, and storage medium

Country Status (2)

Country Link
CN (1) CN106961597B (zh)
WO (1) WO2018166224A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4090001A4 (en) * 2020-01-07 2023-05-17 Arashi Vision Inc. METHOD, DEVICE, DEVICE AND STORAGE MEDIA FOR PANORAMIC VIDEO CLIPS

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106961597B (zh) * 2017-03-14 2019-07-26 深圳Tcl新技术有限公司 Target tracking display method and apparatus for panoramic video
CN107633241B (zh) * 2017-10-23 2020-11-27 三星电子(中国)研发中心 Method and apparatus for automatically annotating and tracking objects in panoramic video
CN109034000A (zh) * 2018-07-04 2018-12-18 广州视源电子科技股份有限公司 Control method and apparatus for advertising machine screen motion, storage medium, and advertising machine
CN111376832A (zh) * 2018-12-28 2020-07-07 奥迪股份公司 Image display method and apparatus, computer device, and storage medium
CN110225402B (zh) * 2019-07-12 2022-03-04 青岛一舍科技有限公司 Method and apparatus for intelligently keeping a target of interest displayed at all times in panoramic video
CN110324641B (zh) * 2019-07-12 2021-09-03 青岛一舍科技有限公司 Method and apparatus for keeping a target of interest displayed at all times in panoramic video
CN111182218A (zh) * 2020-01-07 2020-05-19 影石创新科技股份有限公司 Panoramic video processing method, apparatus, device, and storage medium
CN111413904B (zh) * 2020-04-02 2021-12-21 深圳创维-Rgb电子有限公司 Display scene switching method, smart display screen, and readable storage medium
CN112135046B (zh) * 2020-09-23 2022-06-28 维沃移动通信有限公司 Video shooting method, video shooting apparatus, and electronic device
CN112788425A (zh) * 2020-12-28 2021-05-11 深圳Tcl新技术有限公司 Dynamic area display method, apparatus, device, and computer readable storage medium
CN115396741A (zh) * 2022-07-29 2022-11-25 北京势也网络技术有限公司 Panoramic video playback method, apparatus, electronic device, and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020024599A1 (en) * 2000-08-17 2002-02-28 Yoshio Fukuhara Moving object tracking apparatus
CN101477792A (zh) * 2009-01-21 2009-07-08 深圳华为通信技术有限公司 Method and display apparatus for displaying superimposed graphics on a background picture
CN102843617A (zh) * 2012-09-26 2012-12-26 天津游奕科技有限公司 Method for implementing dynamic hotspots in panoramic video
CN105847379A (zh) * 2016-04-14 2016-08-10 乐视控股(北京)有限公司 Panoramic video movement direction tracking method and tracking apparatus
CN106303706A (zh) * 2016-08-31 2017-01-04 杭州当虹科技有限公司 Method for watching virtual reality video with a protagonist-following viewing angle based on face and object tracking
CN106331732A (zh) * 2016-09-26 2017-01-11 北京疯景科技有限公司 Method and apparatus for generating and presenting panoramic content
CN106961597A (zh) * 2017-03-14 2017-07-18 深圳Tcl新技术有限公司 Target tracking display method and apparatus for panoramic video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI514327B (zh) * 2013-06-26 2015-12-21 Univ Nat Taiwan Science Tech Target detection and tracking method and system
CN105843541A (zh) * 2016-03-22 2016-08-10 乐视网信息技术(北京)股份有限公司 Target tracking display method and apparatus in panoramic video
CN106446002A (zh) * 2016-08-01 2017-02-22 三峡大学 A video retrieval method based on trajectories of moving targets in a map


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4090001A4 (en) * 2020-01-07 2023-05-17 Arashi Vision Inc. METHOD, DEVICE, DEVICE AND STORAGE MEDIA FOR PANORAMIC VIDEO CLIPS
JP7492012B2 (ja) 2020-01-07 2024-05-28 影石創新科技股▲ふん▼有限公司 Panoramic video editing method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN106961597B (zh) 2019-07-26
CN106961597A (zh) 2017-07-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17900651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17900651

Country of ref document: EP

Kind code of ref document: A1