WO2017210908A1 - Processing method and terminal - Google Patents
- Publication number
- WO2017210908A1 (PCT/CN2016/085364)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- feature
- pictures
- terminal
- feature point
- Prior art date
Classifications
- H04N21/23418—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0485—Scrolling or panning
- G06F3/04883—Interaction techniques based on GUIs using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- H04N21/44008—Client-side processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/47205—End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H04N21/4728—End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H04N21/8153—Monomedia components involving graphical data comprising still images, e.g. texture, background image
Definitions
- Embodiments of the present invention relate to wireless communication technologies, and in particular, to a processing method and a terminal.
- Terminals with camera functions are increasingly widely used in daily life.
- A picture taken by a terminal may be unclear due to the terminal itself or to external factors.
- For example, the pixel density (Pixels Per Inch, PPI) of the terminal is limited, so a partial area of the captured picture is not clear; or the terminal is far from the photographed object, so the captured picture is not clear.
- The present invention provides a processing method and a terminal. The terminal acquires feature information of the target content of a first picture, acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose definition is greater than that of the first picture, and then displays the second picture.
- The first aspect provides a method for displaying a picture, applicable to a terminal having a display screen. The method is described from the perspective of the terminal.
- The terminal displays a first picture on the display screen. When the terminal detects a preset gesture acting on the first picture, it triggers, in response to the preset gesture, the following events: determining target content of the first picture according to the preset gesture; acquiring feature information of the target content; acquiring one or more second pictures according to the feature information, where the matching degree between each second picture and the first picture is greater than a preset threshold and the definition of the second picture is greater than the definition of the first picture; and displaying at least one of the one or more second pictures.
- the feature information includes a feature descriptor: the terminal extracts feature points from the target content to obtain a first feature point set, and generates a feature descriptor for each feature point in the first feature point set.
- the terminal locally extracts the feature points from the target content to obtain the feature descriptors.
- the matching degree of the second picture with the first picture being greater than the preset threshold includes: among the feature points of the second picture, the feature points that match feature points in the first feature point set constitute a second feature point set, and the ratio of the number of feature points in the second feature point set to the number of feature points in the first feature point set is greater than the preset threshold;
- two feature points match each other when the distance between their corresponding feature descriptors is smaller than the first threshold.
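The matching-degree test above can be sketched in a few lines. This is an illustrative toy, assuming binary feature descriptors (e.g. ORB-style bit strings); the names `first_threshold` and `preset_threshold` mirror the claim language, and a real terminal would use a proper feature library rather than this brute-force loop.

```python
def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def matching_degree(first_descs, second_descs, first_threshold=40):
    """Ratio of first-picture feature points that find a match in the second
    picture, where a match means descriptor distance below the first threshold."""
    matched = 0
    for d1 in first_descs:
        best = min(hamming(d1, d2) for d2 in second_descs)
        if best < first_threshold:
            matched += 1
    return matched / len(first_descs)

# Toy 8-bit descriptors: three of the four first-picture points match.
first = [0b10110010, 0b01100110, 0b11110000, 0b00001111]
second = [0b10110011, 0b01100110, 0b11110001]
degree = matching_degree(first, second, first_threshold=2)
preset_threshold = 0.5
print(degree > preset_threshold)  # True: the second picture is accepted
```

The second picture is kept as a candidate only when this ratio exceeds the preset threshold.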
- the definition of the second picture being greater than the definition of the first picture includes:
- the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from their center point is greater than a second threshold;
- the center point is the average of the coordinates of the feature points in the second feature point set;
- the distance between a feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
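The definition comparison above amounts to: the second picture is sharper when the same matched content spans more pixels, i.e. its feature points are spread farther from their centroid. A minimal sketch, with made-up coordinates:

```python
# Sketch of the definition (sharpness) test: compare the average distance of
# matched feature points from their center point in each picture.
import math

def center_point(points):
    """Average of the feature-point coordinates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def average_distance(points):
    """Mean pixel distance of the feature points from their center point."""
    cx, cy = center_point(points)
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

# Matched feature points: the second picture's set is spread wider,
# so the same content occupies more pixels there, i.e. it is sharper.
first_set = [(10, 10), (12, 10), (10, 12), (12, 12)]
second_set = [(20, 20), (28, 20), (20, 28), (28, 28)]

second_threshold = average_distance(first_set)
print(average_distance(second_set) > second_threshold)  # True
```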
- the preset gesture is a zoom-in gesture
- determining the target content of the first picture according to the preset gesture includes: using the content currently displayed on the display screen as the target content.
- the preset gesture is a circling gesture
- determining the target content of the first picture according to the preset gesture includes: using the content enclosed by the circling gesture as the target content.
- the feature information further includes at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information.
- the displaying of the at least one of the one or more second pictures includes: displaying the picture with the highest definition among the one or more second pictures; or
- the method further includes detecting another preset gesture acting on the display screen;
- in response to the another preset gesture, another second picture is displayed, the definition of the other second picture being less than that of the highest-definition picture.
- the method further includes: redisplaying the second picture with the highest definition.
- before the displaying of the at least one second picture of the one or more second pictures, the method further includes: determining a transformation matrix between the at least one second picture and the first picture, and transforming the at least one second picture according to the transformation matrix;
- the displaying of the at least one of the one or more second pictures includes:
- displaying the at least one transformed second picture.
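The transformation step above aligns the second picture with the first before display. The patent speaks only of a transformation matrix (in practice often a homography estimated robustly from the matched feature points); as a hedged illustration, the sketch below estimates a simpler 2D similarity transform (scale, rotation, translation) in closed form from point correspondences.

```python
# Least-squares 2D similarity transform from matched feature points:
# maps src points onto dst as  x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
def estimate_similarity(src, dst):
    n = len(src)
    scx = sum(x for x, _ in src) / n; scy = sum(y for _, y in src) / n
    dcx = sum(x for x, _ in dst) / n; dcy = sum(y for _, y in dst) / n
    sa = sb = norm = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        px, py = sx - scx, sy - scy          # centered source point
        qx, qy = dx - dcx, dy - dcy          # centered destination point
        sa += px * qx + py * qy              # dot-product term  -> a
        sb += px * qy - py * qx              # cross-product term -> b
        norm += px * px + py * py
    a, b = sa / norm, sb / norm
    tx = dcx - (a * scx - b * scy)
    ty = dcy - (b * scx + a * scy)
    return a, b, tx, ty

def apply_transform(t, p):
    a, b, tx, ty = t
    return (a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty)

# Source points scaled by 2 and shifted by (5, 3): recovered exactly.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(5, 3), (7, 3), (5, 5), (7, 5)]
t = estimate_similarity(src, dst)
print(apply_transform(t, (0.5, 0.5)))  # (6.0, 4.0): the mapped source center
```

A production implementation would estimate a full 3x3 homography with outlier rejection (e.g. RANSAC), but the role in the method is the same: warp the second picture into the first picture's frame before displaying it.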
- the acquiring of one or more second pictures according to the feature information includes: acquiring the one or more second pictures from a memory of the terminal according to the feature information; when no second picture is acquired from the memory, sending the feature information to a server, and receiving the one or more second pictures sent by the server according to the feature information.
- The second aspect provides a method for displaying a picture, applicable to a server that stores pictures.
- The server receives feature information of target content sent by a terminal, where the target content is determined by the terminal from a first picture according to a preset gesture,
- and the first picture is displayed on the display screen of the terminal; the server acquires one or more second pictures according to the feature information, where the matching degree of each second picture with the first picture is greater than a preset threshold and the definition of the second picture is greater than the definition of the first picture;
- and the server sends the one or more second pictures to the terminal, triggering the terminal to display at least one of the one or more second pictures.
- the feature information includes a feature descriptor, where the terminal extracts feature points from the target content to obtain a first feature point set, and each feature point in the first feature point set is described by its feature descriptor.
- the feature information includes the target content; before the acquiring of the one or more second pictures according to the feature information, the method further includes: extracting feature points from the target content to obtain a first feature point set, and generating a feature descriptor for each feature point in the first feature point set.
- the definition of the second picture being greater than the definition of the first picture includes:
- the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from their center point is greater than the second threshold;
- the center point is the average of the coordinates of the feature points in the second feature point set, and the distance between a feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- before the acquiring of one or more second pictures according to the feature information, the method further includes: receiving at least one of GPS information and IMU information of the first picture sent by the terminal;
- the acquiring of the one or more second pictures according to the feature information then includes: determining, from the stored pictures, a first set according to the GPS information and a second set according to the IMU information, and acquiring the one or more second pictures from the first set and the second set according to the feature information.
- The third aspect provides a method for displaying a picture for a terminal having a display screen, including:
- detecting a magnifying gesture acting on a first picture, the first picture being displayed on the display screen;
- The fourth aspect provides a terminal, including:
- a display configured to display a first picture;
- a processor configured to detect a preset gesture acting on the first picture and, in response to the preset gesture, trigger the following events: determining target content of the first picture according to the preset gesture; acquiring feature information of the target content; and acquiring one or more second pictures according to the feature information, where the matching degree of each second picture with the first picture is greater than a preset threshold and the definition of the second picture is greater than the definition of the first picture;
- the display is further configured to display at least one of the one or more second pictures.
- the feature information includes a feature descriptor;
- the acquiring, by the processor, of the feature information of the target content includes:
- the processor extracts feature points from the target content to obtain a first feature point set, and generates a feature descriptor for each feature point in the first feature point set.
- the definition of the second picture being greater than the definition of the first picture includes:
- the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from their center point is greater than the second threshold;
- the center point is the average of the coordinates of the feature points in the second feature point set, and the distance between a feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- the preset gesture is a zoom-in gesture
- the processor is specifically configured to use the content currently displayed on the display screen as the target content.
- the preset gesture is a circling gesture
- the processor is specifically configured to use the content enclosed by the circling gesture as the target content.
- the feature information further includes at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information.
- the display being further configured to display at least one of the one or more second pictures includes:
- the display is further configured to display the picture with the highest definition among the one or more second pictures;
- when the display displays the highest-definition picture among the one or more second pictures, the processor is further configured to detect another preset gesture acting on the display screen;
- in response to the another preset gesture, the display displays another second picture, the definition of the other second picture being less than that of the highest-definition picture.
- the processor is further configured to: before the display displays the at least one second picture of the one or more second pictures, determine a transformation matrix between the at least one second picture and the first picture, and transform the at least one second picture according to the transformation matrix;
- the display is specifically configured to display the transformed at least one second picture.
- the processor being configured to acquire one or more second pictures according to the feature information includes:
- the processor is configured to acquire the one or more second pictures from a memory of the terminal according to the feature information;
- the terminal further includes a transceiver configured to: when the processor does not acquire a second picture from the memory of the terminal, send the feature information to a server, and receive the one or more second pictures sent by the server according to the feature information.
- The fifth aspect provides a terminal, including a processor, a memory, a communication interface, a system bus, and a display, where the memory and the communication interface are connected to the processor through the system bus and communicate with each other;
- the memory is configured to store computer-executable instructions;
- the communication interface is configured to communicate with other devices;
- the processor is configured to run the computer-executable instructions to cause the terminal to perform the method provided by the first aspect or any possible implementation of the first aspect.
- a sixth aspect provides a server, including:
- a transceiver configured to receive feature information of target content sent by a terminal, where the target content is determined by the terminal from a first picture according to a preset gesture, and the first picture is displayed on a display screen of the terminal;
- a processor configured to acquire one or more second pictures according to the feature information, where the matching degree of each second picture with the first picture is greater than a preset threshold and the definition of the second picture is greater than the definition of the first picture;
- the transceiver is further configured to send the one or more second pictures to the terminal, so that the terminal displays at least one of the one or more second pictures.
- the feature information includes a feature descriptor, where the terminal extracts feature points from the target content to obtain a first feature point set, and each feature point in the first feature point set is described by its feature descriptor.
- the feature information includes the target content; the processor is further configured to extract feature points from the target content to obtain a first feature point set, and to generate a feature descriptor for each feature point in the first feature point set.
- the feature points of the second picture that match the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from their center point is greater than the second threshold;
- the center point is the average of the coordinates of the feature points in the second feature point set, and the distance between a feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- before the one or more second pictures are acquired according to the feature information, the transceiver is further configured to receive at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information of the first picture sent by the terminal;
- the processor is further configured to determine, from the stored pictures, a first set according to the GPS information and a second set according to the IMU information, where the pictures in the first set have the same photographing position information as the first picture and the pictures in the second set have the same photographing orientation information as the first picture; and to acquire the one or more second pictures from the first set and the second set according to the feature information.
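The server-side filtering above narrows the candidates by photographing position (GPS) and orientation (IMU) before any feature matching. A minimal sketch; the record layout and exact-equality tests are hypothetical illustrations (a real server would match positions within a tolerance radius):

```python
def candidate_pictures(stored, gps, imu):
    """Intersect the GPS-filtered first set with the IMU-filtered second set."""
    first_set = [p for p in stored if p["gps"] == gps]    # same photographing position
    second_set = [p for p in stored if p["imu"] == imu]   # same photographing orientation
    return [p for p in first_set if p in second_set]

stored = [
    {"name": "a.jpg", "gps": (39.9, 116.4), "imu": "north"},
    {"name": "b.jpg", "gps": (39.9, 116.4), "imu": "east"},
    {"name": "c.jpg", "gps": (31.2, 121.5), "imu": "north"},
]

for p in candidate_pictures(stored, gps=(39.9, 116.4), imu="north"):
    print(p["name"])  # only a.jpg matches both position and orientation
```

Feature descriptors then need to be compared only against this (much smaller) intersection.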
- The seventh aspect provides a server, including a processor, a memory, a communication interface, and a system bus, where the memory and the communication interface are connected to the processor through the system bus and communicate with each other; the memory is configured to store computer-executable instructions, the communication interface is configured to communicate with other devices, and the processor is configured to run the computer-executable instructions to cause the server to perform the method provided by the second aspect or any possible implementation of the second aspect.
- The eighth aspect provides a terminal having the function of implementing the terminal behavior in the above method designs.
- the functions may be implemented by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more units corresponding to the functions described above.
- the structure of the terminal includes a processor and a transmitter, the processor being configured to support the terminal in performing the corresponding functions in the above method.
- the transmitter is configured to support communication between the terminal and the server, and to send information or instructions involved in the foregoing method to the server.
- the terminal may also include a memory, coupled with the processor, which stores program instructions and data necessary for the terminal.
- The ninth aspect provides a server having the function of implementing the server behavior in the above method design.
- the functions may be implemented by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more units corresponding to the functions described above.
- the module can be software and/or hardware
- the structure of the server includes a receiver and a processor, the processor being configured to support the server in performing the corresponding functions in the above methods.
- the receiver is configured to support communication between the server and the terminal, and to receive information or instructions involved in the foregoing method sent by the terminal.
- the server may also include a memory, coupled with the processor, which stores program instructions and data necessary for the server.
- The tenth aspect provides a communication system comprising the server and the terminal of the above aspects.
- The eleventh aspect provides a computer storage medium configured to store computer-executable instructions for use by the terminal, the computer-executable instructions including instructions for performing the method provided by the first aspect, the third aspect, or any possible implementation thereof.
- The twelfth aspect provides a computer storage medium configured to store computer-executable instructions for use by the server, the computer-executable instructions including instructions for performing the method provided by the second aspect or any possible implementation of the second aspect.
- The thirteenth aspect provides a chip system, including at least one processor, a memory, an input/output portion, and a bus; the at least one processor obtains instructions in the memory through the bus, to implement the functions of the terminal involved in the above method design.
- The fourteenth aspect provides a chip system, including at least one processor, a memory, an input/output portion, and a bus; the at least one processor obtains instructions in the memory through the bus, to implement the functions of the server involved in the above method design.
- The fifteenth aspect provides a computer-readable storage medium storing one or more programs; when the one or more programs are executed by the terminal, the terminal performs the steps of the method applied to the terminal as described above.
- The sixteenth aspect provides a graphical user interface on a terminal, the terminal including a display, a memory, a plurality of applications, and one or more processors for executing one or more programs stored in the memory;
- the graphical user interface includes the user interface displayed when the method applied to the terminal as described above is performed.
- In the embodiments, the terminal acquires feature information of the target content of a first picture, acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose definition is greater than that of the first picture, and then displays the second picture, thereby providing the user with a higher-definition picture.
- FIG. 1 is a schematic diagram of a system applicable to a method for displaying a picture according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a method for displaying a picture according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of another method for displaying a picture according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of another method for displaying a picture according to an embodiment of the present invention.
- FIG. 5A is a schematic diagram of a first picture according to an embodiment of the present invention.
- FIG. 5B is a schematic diagram of a second picture according to an embodiment of the present invention.
- FIG. 5C is a schematic diagram of switching a second picture according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of determining an average distance according to an embodiment of the present invention.
- FIG. 7A is a schematic diagram of another first picture according to an embodiment of the present invention.
- FIG. 7B is a schematic diagram of another second picture according to an embodiment of the present invention.
- FIG. 7C is a schematic diagram of a picture obtained by transforming a second picture according to an embodiment of the present invention.
- FIG. 8A is a flowchart of requesting a next second picture according to an embodiment of the present invention.
- FIG. 8B is a flowchart of requesting to return a previous second picture according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram of another method for displaying a picture according to an embodiment of the present disclosure.
- FIG. 10A is a schematic diagram of stopping scroll display according to an embodiment of the present invention.
- FIG. 10B is a schematic diagram of another stop scroll display according to an embodiment of the present invention.
- FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
- FIG. 12 is a schematic structural diagram of another terminal according to an embodiment of the present disclosure.
- FIG. 13 is a schematic structural diagram of a server according to an embodiment of the present disclosure.
- FIG. 14 is a schematic structural diagram of another server according to an embodiment of the present invention.
- the target content is continuously enlarged, but can be viewed only within the maximum magnification range supported by the terminal.
- an embodiment of the present invention provides a method and a terminal for displaying a picture: the terminal acquires feature information of the target content of the first picture, acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose definition is greater than that of the first picture, and then displays the second picture.
- when the terminal acquires, according to the feature information, the one or more second pictures whose matching degree with the first picture is greater than the preset threshold, it may obtain them from local storage. In addition, the terminal may also send the feature information to the server, and the server obtains the second picture from the pictures stored on the server. In that case, in addition to the basic input and output modules, the terminal needs the ability to connect to the server, and the connection mode can be Bluetooth, Wi-Fi, the Internet, and the like.
- FIG. 1 is a schematic diagram of a system applicable to a method for displaying a picture according to an embodiment of the present invention. As shown in FIG. 1 , in the embodiment of the present invention, a terminal and/or a server are involved.
- the terminal involved in the embodiment of the present invention may be a terminal having a display screen, etc.; the terminal may be a device that provides voice and/or data connectivity to the user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem.
- the terminal can communicate with one or more core networks via a radio access network (eg, RAN, Radio Access Network), and the terminal can be a mobile device, such as a mobile phone (or "cellular" phone) and a computer with a mobile terminal.
- it may be a portable, pocket-sized, handheld, computer-built-in or in-vehicle mobile device that exchanges voice and/or data with the radio access network.
- FIG. 2 is a schematic diagram of a method for displaying a picture according to an embodiment of the present invention.
- the embodiment of the present invention is applicable to a scenario that requires a terminal to display a second picture that matches the target content and has a higher definition than the target content.
- the embodiment of the present invention includes the following steps:
- the terminal displays a first picture on its display screen
- the terminal detects a preset gesture that acts on the first picture, and triggers the following event in response to the preset gesture;
- the terminal determines, according to the preset gesture, target content of the first picture.
- the user operates on the first picture, so that the terminal determines the target content from the first picture.
- for example, if the preset gesture is a zoom-in gesture, the target content is determined from the first picture according to the zoom-in gesture;
- for example, if the preset gesture is a circle gesture, the area circled by the gesture is used as the target content.
- different preset gestures can be understood as different trigger conditions.
- the terminal acquires feature information of the target content.
- the terminal acquires one or more second pictures according to the feature information.
- after determining the target content, the terminal acquires the feature information of the target content and matches the second picture according to the feature information, where the matching degree of the second picture with the first picture is greater than a preset threshold and the clarity of the second picture is higher than that of the first picture.
- for two feature points in different pictures, their feature descriptors are calculated; if the two feature descriptors are similar, for example, if the Euclidean distance between them is smaller than the first threshold, the feature points corresponding to the two descriptors are considered to match. In this way, tracking of feature points between different pictures can be achieved (feature point tracking can also be understood as feature point matching).
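A minimal pure-Python sketch of the Euclidean-distance matching described above (the descriptors, threshold value, and function names are illustrative, not from the patent; real descriptors would be SIFT/SURF/HOG vectors):

```python
import math

def euclidean(d1, d2):
    """Euclidean distance between two feature descriptors (vectors)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match_feature_points(descs_a, descs_b, first_threshold=0.5):
    """Return index pairs (i, j) of feature points whose nearest descriptors
    lie closer than the first threshold; such points are considered to match."""
    matches = []
    for i, da in enumerate(descs_a):
        # nearest descriptor in the other picture
        j, dist = min(((j, euclidean(da, db)) for j, db in enumerate(descs_b)),
                      key=lambda t: t[1])
        if dist < first_threshold:
            matches.append((i, j))
    return matches

descs_a = [[0.0, 1.0], [3.0, 3.0]]
descs_b = [[0.1, 1.0], [9.0, 9.0]]
print(match_feature_points(descs_a, descs_b))  # [(0, 0)]
```

Only the first point of picture A finds a close enough counterpart in picture B, so only one match survives; this is the per-point decision that feature point tracking repeats across pictures.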
- the terminal matches at least one second picture for the first picture from the locally stored picture.
- the second picture includes the target content in the first picture, the matching degree of the second picture with the first picture is greater than a preset threshold, and the definition of the second picture is higher than that of the first picture.
- the terminal extracts feature points from the picture to be matched; among these feature points, the ones that match feature points in the first feature point set form a second feature point set.
- if the ratio of the number of feature points in the second feature point set to the number of feature points in the first feature point set is greater than a preset threshold, the terminal further determines whether the average distance of each feature point in the second feature point set from the center point is greater than the second threshold; if it is, the picture to be matched is considered to be a second picture.
- the center point is an average value of coordinates of each feature point, and the distance between each feature point and the center point is the number of pixels between the feature point and the center point.
- the second threshold is the average distance of each feature point in the first feature point set from the center point of the first feature point set, or a preset value.
- for example, suppose the total number of feature points in the first feature point set is 10. For a certain picture to be matched among the pictures stored by the terminal, the terminal extracts its feature points, and the ones that match feature points in the first feature point set form a second feature point set.
- assuming that the second feature point set includes 8 feature points, the ratio of the number of feature points in the second feature point set to the number of feature points in the first feature point set is 80%; if the preset threshold is less than 80%, the matching degree between the picture to be matched and the first picture is greater than the preset threshold.
- otherwise, the picture to be matched does not match the first picture, and there is no need to further determine whether the average distance of each feature point in the second feature point set from the center point is greater than the second threshold.
- even if the ratio of the number of feature points in the second feature point set to the number of feature points in the first feature point set is greater than the preset threshold, if the average distance of the feature points in the second feature point set from the center point is not greater than the second threshold, that is, if the definition of the picture to be matched is not high enough, the picture to be matched cannot be used as the second picture.
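A sketch of the two-stage acceptance test described above (function names and threshold values are my own): a candidate is accepted as a second picture only if the matched-point ratio exceeds the preset threshold and the matched points' average distance from their center point exceeds the second threshold.

```python
import math

def average_distance_from_center(points):
    """Center point = coordinate-wise mean; return mean distance from it."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

def is_second_picture(first_set_size, second_set, ratio_threshold, second_threshold):
    ratio = len(second_set) / first_set_size
    if ratio <= ratio_threshold:          # matching degree too low
        return False
    # a larger spread of matched points stands in for higher definition here
    return average_distance_from_center(second_set) > second_threshold

matched = [(0, 0), (4, 0), (0, 4), (4, 4)]  # 4 of 5 first-set points matched
print(is_second_picture(5, matched, ratio_threshold=0.7, second_threshold=2.0))
```

With a 0.8 ratio and an average distance of about 2.83 pixels from the center (2, 2), both checks pass and the candidate qualifies; shrinking the point spread below the second threshold would reject it on the definition check alone.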
- the terminal displays the second picture.
- after obtaining the one or more second pictures, the terminal displays at least one second picture: for example, the sharpest of the one or more second pictures; or all of the one or more second pictures; or at least one of the one or more second pictures in a scrolling display.
- the terminal acquires feature information of the target content of the first picture, and acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose definition is greater than that of the first picture.
- the terminal matches at least one second picture for the first picture from the locally stored picture.
- the terminal may also send the feature information to the server, and the server matches the second picture for the first picture.
- the terminal may first match the second picture in the memory of the terminal, and if the one or more second pictures are matched, display at least one of the one or more second pictures. If the second picture is not matched, the terminal sends the feature information to the server, and receives one or more second pictures that the server matches.
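The local-first lookup order described above can be sketched as follows (the two lookup callables stand in for the real local and server-side matching; all names are illustrative):

```python
def acquire_second_pictures(feature_info, match_local, match_server):
    """Search the terminal's own memory first; fall back to the server
    only when no second picture is matched locally."""
    pictures = match_local(feature_info)
    if pictures:                 # found in the terminal's own storage
        return pictures, "local"
    return match_server(feature_info), "server"

local = {"cat": ["cat_hd.jpg"]}
remote = {"dog": ["dog_hd.jpg"]}
print(acquire_second_pictures("dog", local.get, lambda k: remote.get(k, [])))
# (['dog_hd.jpg'], 'server')
```

Keeping the two lookups behind the same interface lets the terminal add or remove the server fallback without changing the display logic.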
- FIG. 3 is a schematic diagram of another method for displaying a picture according to an embodiment of the present invention, including:
- the terminal determines the target content from the first picture.
- after determining the target content, the terminal sends the feature information to the server, so that the server matches the second picture to the first picture according to the feature information.
- the feature information may be sent by mode A or mode B.
- Mode A includes:
- the terminal extracts feature points from the target content to obtain a first feature point set, and generates a feature descriptor for each feature point in the first feature point set.
- the terminal sends a feature descriptor to the server.
- Mode B includes:
- the terminal sends the target content to the server.
- the server extracts feature points from the target content to obtain a first feature point set, and generates a feature descriptor for each feature point in the first feature point set.
- feature points in a picture are points located in regions where the gray level changes sharply; they are easier to distinguish from surrounding pixels and easy to detect.
- Feature descriptors include Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Histogram of Oriented Gradients (HOG).
- the feature descriptor is usually an n-dimensional vector. When the feature descriptor is generated, a gradient direction histogram, weighted by gradient magnitude and constructed from the pixels within a certain range around the feature point, is produced.
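A minimal sketch of that construction (the bin count, patch size, and border handling are my own simplifications, not the patent's): a histogram of gradient directions over the pixels around a feature point, each contribution weighted by gradient magnitude.

```python
import math

def orientation_histogram(patch, bins=8):
    """patch: 2D list of gray values centered on the feature point.
    Returns a magnitude-weighted histogram of gradient directions."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist

patch = [[0, 0, 0], [10, 20, 30], [0, 0, 0]]  # purely horizontal gradient
print(orientation_histogram(patch))
```

The single interior pixel has gradient (20, 0), so all of the weight lands in the bin for direction 0; concatenating such histograms over sub-regions is, roughly, how SIFT- and HOG-style descriptors become the n-dimensional vectors mentioned above.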
- the server matches the first picture to the second picture according to the feature information.
- the server sends a second picture to the terminal.
- the terminal displays the second picture.
- FIG. 4 is a schematic diagram of another method for displaying a picture according to an embodiment of the present invention.
- the preset gesture is specifically an enlarged gesture.
- the embodiment of the present invention includes the following steps:
- the terminal detects a zoom-in gesture applied to the first picture.
- the user enlarges the first picture by the zoom gesture, for example, enlarges the first picture by a zoom gesture such as two-finger expansion.
- the terminal responds to the zoom-in gesture to enlarge the first picture.
- when the terminal determines that the magnification of the first picture reaches or exceeds the third threshold and the zoom-in gesture applied to the first picture is still being detected, the area of the first picture displayed on the display screen is used as the target content.
- that is, when the terminal determines that the user continues to enlarge the first picture, it takes the area of the first picture displayed on the display screen as the user's target content.
- the target content may also be referred to as a visible area. Specifically, see FIG. 5A.
- FIG. 5A is a schematic diagram of a first picture according to an embodiment of the present invention.
- the user enlarges the first picture to the maximum multiple supported by the terminal, but still cannot see other words except "Hello” and "World”.
- the terminal concludes that the user still cannot see the first picture clearly, and uses the area of the first picture displayed on the display screen as the target content.
- the terminal extracts feature points from the target content to obtain a first feature point set, and generates a feature descriptor for each feature point in the first feature point set.
- the terminal sends a feature descriptor to the server.
- the server determines, according to the feature descriptor, whether there is a picture whose matching degree with the first picture is greater than a preset threshold; if yes, step 307 is performed; otherwise, step 306 is performed.
- among the stored pictures, the server determines, according to the feature descriptor, whether there is a picture whose matching degree with the first picture is greater than the preset threshold; such a picture is used as a matching picture.
- the picture whose matching degree is greater than the preset threshold refers to: in the feature points of the picture, the feature points that match the feature points in the first feature point set constitute a second feature point set, and the feature points in the second feature point set The ratio of the number to the number of feature points in the first set of feature points is greater than the first threshold.
- the terminal sends the GPS information and the IMU information of the first picture to the server
- the server further determines a first set from the stored pictures according to the GPS information, and/or determines a second set from the stored pictures according to the IMU information, where the pictures in the first set have the same shooting location information as the first picture, and the pictures in the second set have the same shooting direction information as the first picture. Then, according to the feature descriptor, the second picture is matched to the first picture from the first set and/or the second set.
- the server does not perform feature point extraction on all the stored pictures, but selects a part of the pictures from the stored pictures, and only extracts feature points of that part and matches them against the feature points of the first picture. In this way, the server narrows the search scope for the second picture, thereby reducing the search time.
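A sketch of that candidate narrowing (the dictionary-shaped picture records and field names are hypothetical): pictures are kept only if their stored shooting location matches the first picture's GPS information or their shooting direction matches its IMU information, so feature matching runs on a much smaller set.

```python
def narrow_candidates(stored, gps=None, imu=None):
    """Return names of stored pictures whose shooting location (GPS) or
    shooting direction (IMU) matches the first picture's, merged and sorted."""
    first_set = [p for p in stored if gps is not None and p["gps"] == gps]
    second_set = [p for p in stored if imu is not None and p["imu"] == imu]
    # feature matching then runs only on the union of the two sets
    merged = {p["name"]: p for p in first_set + second_set}
    return sorted(merged)

stored = [
    {"name": "a.jpg", "gps": (40.0, 116.3), "imu": "north"},
    {"name": "b.jpg", "gps": (40.0, 116.3), "imu": "south"},
    {"name": "c.jpg", "gps": (31.2, 121.4), "imu": "north"},
]
print(narrow_candidates(stored, gps=(40.0, 116.3), imu="north"))
# ['a.jpg', 'b.jpg', 'c.jpg'] — a and b share the location, a and c the direction
```

A production system would compare locations within a radius rather than by equality; exact matching keeps the sketch short.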
- the server sends a matching failure message to the terminal.
- the server determines, from the matching picture, a picture whose resolution is higher than a second threshold.
- the picture whose resolution is higher than the second threshold determined from the matching picture is the second picture.
- from the pictures whose matching degree with the first picture is greater than the preset threshold, the server takes those in which the average distance of each feature point in the second feature point set from the center point is greater than the second threshold as second pictures.
- FIG. 6 is a schematic diagram of determining an average distance according to an embodiment of the present invention.
- the second feature point set formed by the feature points matching the feature points in the first feature point set includes four feature points.
- the four feature points are shown as hollow circles in the figure, and the center point of the feature point set is shown as a solid circle in the figure.
- the coordinates of the four feature points are P1(x1, y1), P2(x2, y2), P3(x3, y3), P4(x4, y4)
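Working through the four points of FIG. 6 with illustrative numeric coordinates (the patent gives only symbolic names P1..P4): the center point is the coordinate-wise average, and the average distance is the mean distance of each point from that center.

```python
import math

p = [(1.0, 1.0), (5.0, 1.0), (1.0, 5.0), (5.0, 5.0)]  # P1..P4 (example values)
cx = sum(x for x, _ in p) / 4    # center point x: (x1 + x2 + x3 + x4) / 4
cy = sum(y for _, y in p) / 4    # center point y: (y1 + y2 + y3 + y4) / 4
avg = sum(math.hypot(x - cx, y - cy) for x, y in p) / 4
print((cx, cy), round(avg, 3))   # (3.0, 3.0) 2.828
```

Here each point is sqrt(8) ≈ 2.828 pixels from the center (3, 3), so the average distance is 2.828; the server would compare this value against the second threshold.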
- this step is optional, that is, the server may not sort the matching pictures, but use all matching pictures as second pictures and send all of them to the terminal, or randomly send one of them to the terminal.
- a special second picture may also be sent to the terminal, for example, the second picture with the highest definition and the like.
- the server sends a second picture to the terminal.
- when sending the second picture, the server also sends the location coordinates of each feature point in the second feature point set of the second picture to the terminal.
- the server may send only one second picture to the terminal, namely the picture with the highest definition among all the second pictures; or the server may send all or part of the second pictures to the terminal.
- the server may send N second pictures to the terminal in this step.
- the second picture with the largest average distance, that is, the second picture with the highest definition.
- in step 309, the terminal determines whether the matching failure message or the second picture sent by the server is received; if the matching failure message is received, step 310 is performed; otherwise, step 311 is performed.
- the terminal does not perform any operation.
- the terminal determines a transformation matrix between the second picture and the first picture.
- the terminal determines a transformation matrix between the second picture and the first picture, and transforms the second picture according to the transformation matrix.
- based on the feature points in the first picture and the corresponding feature points in the second picture, the terminal calculates a transformation matrix between the two corresponding feature point sets; the transformation matrix is, for example, represented as a 3 × 3 matrix.
- the transformation matrix between the two corresponding feature point sets is the transformation between the second picture and the first picture.
- a linear algorithm such as Direct Linear Transformation (DLT) is used to calculate an initial value, which is then further optimized using a nonlinear algorithm.
- the nonlinear algorithm includes Gauss-Newton method, Gradient Descent method, L-M (Levenberg-Marquardt) algorithm and the like.
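Once estimated, applying the 3 × 3 transformation matrix to a feature point works as sketched below (the matrix values are illustrative): coordinates are lifted to homogeneous form, multiplied by the matrix, then normalized by the perspective division.

```python
def apply_homography(H, x, y):
    """Map point (x, y) through 3x3 matrix H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w        # perspective division back to 2D

# a pure translation by (+10, +5) expressed as a 3x3 matrix
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, 2.0, 3.0))  # (12.0, 8.0)
```

The bottom row is what distinguishes a full perspective transform from the rotation/translation/zoom cases; DLT fits all nine entries (up to scale) from the matched point pairs, and the nonlinear algorithms above refine that fit.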
- the terminal transforms the second picture according to the transformation matrix.
- the gray value of each point on the transformed second picture can be obtained by inverse mapping and interpolation or the like.
- transforming the second picture eliminates the visual jump between the second picture and the picture currently displayed on the display screen of the terminal, that is, the rotation, translation, zoom, and the like existing between the second picture and the first picture.
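The inverse-mapping step can be sketched on a tiny grayscale grid (nearest-neighbor sampling here; real systems interpolate, and the inverse map would come from the inverted transformation matrix):

```python
def warp(src, inv, out_h, out_w):
    """Fill each output pixel by mapping it back into the source picture."""
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            sx, sy = inv(x, y)                     # inverse-mapped source position
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sy < len(src) and 0 <= sx < len(src[0]):
                out[y][x] = src[sy][sx]            # nearest-neighbor gray value
    return out

src = [[1, 2],
       [3, 4]]
# inverse map of "shift the picture one pixel to the left"
shift_left = lambda x, y: (x + 1, y)
print(warp(src, shift_left, 2, 2))  # [[2, 0], [4, 0]]
```

Iterating over destination pixels and mapping backwards (rather than pushing source pixels forwards) is what guarantees every output pixel receives exactly one gray value, which is why the transformed second picture can seamlessly replace the first.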
- the terminal displays the transformed second picture.
- the terminal replaces the currently displayed first picture with the transformed second picture. Specifically, see FIG. 5B.
- FIG. 5B is a schematic diagram of a second picture according to an embodiment of the present invention. As shown in FIG. 5B, for the original picture that cannot be displayed clearly in FIG. 5A, a second picture with high definition can be found on the server.
- the terminal triggers the server to search for a high-definition second picture matching the first picture; the second picture is returned to the terminal, so that the terminal replaces the first picture with the second picture and displays it, thereby providing the user with a higher definition picture.
- the terminal transforms the second picture according to the transformation matrix.
- the above transformation will be described in detail below. For details, refer to FIG. 7A to FIG. 7C.
- FIG. 7A is a schematic diagram of another first picture according to an embodiment of the present invention.
- the first picture is originally displayed on the display screen of the terminal, and the user wants to see the word on the gum can in the first picture.
- the area displayed on the display screen of the terminal is the target content of the user.
- the terminal extracts feature points from the target content to obtain the first feature point set, and generates a feature descriptor for each feature point in the first feature point set; the server then matches a second picture to the first picture from the stored pictures. Specifically, see FIG. 7B.
- FIG. 7B is a schematic diagram of another second picture according to an embodiment of the present invention.
- the second picture returned from the server has a high similarity with the first picture, that is, the two pictures match each other, and the definition of the second picture is higher than the definition of the target content in the first picture. Since the area of the display screen is limited, if the second picture is not transformed, the area of the second picture shown on the display screen may not be the area that best matches the target content in the first picture. The user would then need to operate on the screen of the terminal to bring the area of the second picture that the user is really interested in onto the display screen. To avoid this trouble caused by user operations, the second picture needs to be transformed. Specifically, see FIG. 7C.
- FIG. 7C is a schematic diagram of a picture obtained by transforming a second picture according to an embodiment of the present invention.
- the user's target content in the high-definition picture returned from the server is translated, rotated, and scaled into the display range of the terminal.
- the above steps in FIG. 4 can be understood as a process of replacing the first picture on the terminal with the second picture in the server.
- the server only returns a second picture to the terminal.
- the user may request another second picture according to the requirement, that is, another preset gesture issued by the user.
- FIG. 5C, FIG. 8A and FIG. 8B explain how the terminal displays the next or previous picture according to the user's needs.
- FIG. 5C is a schematic diagram of switching a second picture according to an embodiment of the present invention.
- the user can use the buttons of the terminal, for example pressing the right button to obtain a second picture whose definition is lower than that of the previously displayed picture (the specific process can be seen in FIG. 8A), or pressing the left button to return to the previously displayed picture (the specific process can be seen in FIG. 8B).
- the right button means a button on the right; on a touch screen, the corresponding operation may be sliding to the left. The left button means a button on the left; on a touch screen, the corresponding operation may be sliding to the right.
- pressing the right button, pressing the left button, sliding to the right, and sliding to the left can be understood as another preset gesture.
- FIG. 8A is a flowchart of requesting a next second picture according to an embodiment of the present invention.
- Embodiments of the present invention include the following steps:
- the terminal determines whether the user presses the right button. If not, performs step 402; otherwise, performs step 403.
- the next second picture has a lower definition than the second picture previously returned by the server to the terminal.
- the terminal does not perform any processing.
- the terminal sends a request message for accessing the next second picture to the server.
- the server returns a next second picture to the terminal.
- the definition of the second picture now returned by the server to the terminal is lower than that of the second picture previously returned by the server to the terminal.
- the position coordinates of each feature point in the second feature point set of the lower-definition second picture are transmitted to the terminal at the same time.
- the terminal determines a transformation matrix between the second picture and the first picture.
- the second picture can be understood as the second picture returned by the server to the terminal after receiving the application for displaying the next picture sent by the terminal.
- the terminal transforms the second picture according to the transformation matrix.
- the terminal displays the transformed second picture.
- FIG. 8B is a flowchart of requesting to return a previous second picture according to an embodiment of the present invention.
- Embodiments of the present invention include the following steps:
- the terminal determines whether the user presses the left button; if not, step 502 is performed; otherwise, step 503 is performed.
- the terminal does not perform any processing.
- the terminal displays the previous second picture.
- the preset gesture is specifically an enlarged gesture.
- the preset gesture may also be a circle gesture. Specifically, refer to FIG. 9.
- FIG. 9 is a schematic diagram of another method for displaying a picture according to an embodiment of the present invention.
- the preset gesture is specifically a circle gesture.
- Embodiments of the present invention include the following steps:
- the terminal detects a circle gesture applied to the first picture, and uses an area circled by the circle gesture as the target content.
- the user circled a certain area from the first picture by a circle gesture.
- the terminal responds to the circle gesture, so that the area circled by the circle gesture in the first picture is used as the target content.
- the circle gesture refers to drawing a closed circle or a non-closed circle on the display screen; if the circle is not closed, the terminal derives a closed circle from the non-closed one and uses the content inside the closed circle as the target content.
- This step can be understood as a process in which the user circles the unclear area through the input/output module of the terminal.
- for details, refer to step 304 of FIG. 3.
- the server determines, according to the feature descriptor, whether there is a picture whose matching degree with the first picture is greater than a preset threshold; if yes, step 606 is performed; otherwise, step 605 is performed.
- the server sends a matching failure message to the terminal.
- the server determines, from the matching picture, a picture whose resolution is higher than a second threshold.
- the terminal determines whether the matching failure message or the second picture sent by the server is received; if the matching failure message is received, step 609 is performed; otherwise, step 610 is performed.
- the terminal does not perform any operation.
- the terminal determines a transformation matrix between the second picture and the first picture.
- the second picture can be understood as the second picture to be displayed by the terminal.
- the terminal transforms the second picture according to the transformation matrix.
- the terminal displays the transformed second picture.
- when the user circles an unclear area in the first picture, the terminal uses the circled area as the target content and triggers the server to search the stored pictures for a matching high-definition second picture.
- the high-definition second picture is returned to the terminal, so that the terminal replaces the first picture with the second picture and displays it, thereby providing the user with a higher definition picture.
- there is at least one second picture.
- when the second picture is returned, if only one second picture is returned, the terminal displays only that second picture; if multiple second pictures are returned, the terminal scrolls through and displays each of the at least one second picture, or the terminal may display the at least one second picture at the same time.
- FIG. 10A is a schematic diagram of a stop scroll display according to an embodiment of the present invention
- FIG. 10B is a schematic diagram of another stop scroll display according to an embodiment of the present invention.
- as shown in FIG. 10A, the second pictures scrolled on the display screen include picture 1, picture 2 and picture 3; if the user presses a button, for example the home button, the scrolling stops. As shown in FIG. 10B, if the user presses an area, such as the area where picture 3 is located, the scrolling stops and picture 3 is enlarged.
- when the terminal displays multiple second pictures at the same time, if a second operation on one of the at least one second picture is detected, the picture corresponding to the second operation is enlarged.
- FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
- the terminal provided by the embodiment of the present invention can implement various steps of the method applied to the terminal provided by any embodiment of the present invention.
- the terminal provided by the embodiment of the present invention includes:
- the processor 12 is configured to detect a preset gesture acting on the first picture, and trigger the following events in response to the preset gesture: determining the target content of the first picture according to the preset gesture, acquiring the feature information of the target content, and acquiring one or more second pictures according to the feature information, where the matching degree of the second picture with the first picture is greater than a preset threshold and the clarity of the second picture is greater than that of the first picture;
- the display 11 is further configured to display at least one of the one or more second pictures.
- the terminal provided by the embodiment of the present invention acquires feature information of the target content of the first picture, acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose definition is greater than that of the first picture, and then displays the second picture, providing the user with a sharper picture.
- the feature information includes: a feature descriptor
- the acquiring feature information of the target content includes:
- the clarity of the second picture is greater than the clarity of the first picture, including:
- the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of each feature point in the second feature point set from the center point is greater than the second threshold;
- the center point is an average value of the coordinates of each feature point in the second feature point set, and the distance between each feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is an average distance of each feature point in the first feature point set from a center point of the first feature point set.
- the preset gesture is a zoom-in gesture
- the processor 12 is specifically configured to use the display content of the display screen as the target content.
- the preset gesture is a circling gesture;
- the processor 12 is specifically configured to use the content enclosed by the circling gesture as the target content.
- the feature information further includes at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information.
- the display 11 is specifically configured to display the picture with the highest sharpness among the one or more second pictures; or
- the processor 12 is further configured to detect another preset gesture acting on the display 11 and, in response to the other preset gesture, cause the display 11 to display another second picture whose sharpness is lower than that of the picture with the highest sharpness.
- the processor 12 is further configured to: before the display 11 displays the at least one of the one or more second pictures, determine a transformation matrix between the at least one second picture and the first picture, and transform the at least one second picture according to the transformation matrix;
- the display 11 is specifically configured to display the transformed at least one second picture.
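The transformation step can be illustrated at the point level. In practice the matrix between the matched feature points of the two pictures would usually be a homography estimated robustly (e.g. with RANSAC); the patent does not specify the estimator, so the sketch below only shows how an already-determined 3x3 matrix maps a coordinate of the second picture into the first picture's frame:

```python
def apply_transformation(H, x, y):
    """Map the point (x, y) through the 3x3 transformation matrix H
    (row-major nested lists) using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

Transforming the whole second picture amounts to applying this mapping (or its inverse, for resampling) to every pixel coordinate before display.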
- the processor 12 is specifically configured to acquire one or more second pictures from a memory of the terminal according to the feature information.
- the foregoing terminal further includes a transceiver 13 configured to: when the processor 12 does not acquire a second picture from the memory of the terminal according to the feature information, send the feature information to a server and receive the one or more second pictures sent by the server according to the feature information.
- FIG. 12 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
- the terminal 200 provided in this example includes a processor 21, a memory 22, a communication interface 23, a system bus 24, and a display 25. The memory 22 and the communication interface 23 are connected to the processor 21 via the system bus 24 and communicate with each other. The memory 22 is configured to store computer-executable instructions, the communication interface 23 is configured to communicate with other devices, and the processor 21 is configured to run the computer-executable instructions so that the terminal 200 performs the steps of the method applied to the terminal as described above.
- FIG. 13 is a schematic structural diagram of a server according to an embodiment of the present invention.
- the server provided by the embodiment of the present invention can implement various steps of the method applied to the server provided by any embodiment of the present invention.
- the server provided by the embodiment of the present invention includes:
- the transceiver 32 is configured to receive feature information of target content sent by the terminal, where the target content is determined by the terminal from a first picture according to a preset gesture, and the first picture is displayed on a display screen of the terminal;
- the processor 33 is configured to acquire one or more second pictures according to the feature information, where the matching degree between the second picture and the first picture is greater than a preset threshold and the sharpness of the second picture is greater than the sharpness of the first picture;
- the transceiver 32 is further configured to send the one or more second pictures to the terminal, so that the terminal displays at least one of the one or more second pictures.
- the server provided by this embodiment of the present invention receives the feature information of the target content sent by the terminal, acquires, according to the feature information, one or more second pictures whose matching degree with the first picture is greater than a preset threshold and whose sharpness is greater than that of the first picture, and then returns the second picture to the terminal so that the terminal displays it, thereby providing the user with a sharper picture.
- the feature information includes a feature descriptor, where the feature descriptor is obtained by extracting feature points from the target content to obtain a first feature point set and generating a descriptor for each feature point in the first feature point set.
- the feature information includes the target content, and the processor 33 is further configured to extract feature points from the target content to obtain a first feature point set and generate a feature descriptor for each feature point in the first feature point set.
- among the feature points of the second picture, the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from a center point is greater than a second threshold, where the center point is the average of the coordinates of the feature points in the second feature point set and the distance between a feature point and the center point is the number of pixels between the feature point and the center point;
- the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- the transceiver 32 is further configured to receive, before the processor 33 acquires the one or more second pictures according to the feature information, at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information of the first picture sent by the terminal;
- the processor 33 is further configured to determine a first set from the stored pictures according to the GPS information and a second set from the stored pictures according to the IMU information, where the pictures in the first set have the same shooting position information as the first picture and the pictures in the second set have the same shooting direction information as the first picture, and to acquire the one or more second pictures from the first set and the second set according to the feature information.
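The GPS/IMU pre-filtering described above can be sketched as a simple selection over the server's stored pictures; the dictionary keys `gps` and `imu` are hypothetical names for this sketch, not fields defined by the patent:

```python
def candidate_sets(stored_pictures, gps_info, imu_info):
    """Determine the first set (same shooting position as the first
    picture, per GPS) and the second set (same shooting direction,
    per IMU) from which second pictures are subsequently matched
    against the feature information."""
    first_set = [p for p in stored_pictures if p.get("gps") == gps_info]
    second_set = [p for p in stored_pictures if p.get("imu") == imu_info]
    return first_set, second_set
```

Restricting descriptor matching to these two sets avoids comparing the target content against every stored picture.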
- FIG. 14 is a schematic structural diagram of another server according to an embodiment of the present invention.
- the server provided in this example includes a processor 41, a memory 42, a communication interface 43, and a system bus 44. The memory 42 and the communication interface 43 are connected to the processor 41 through the system bus 44 and communicate with each other. The memory 42 is configured to store computer-executable instructions, the communication interface 43 is configured to communicate with other devices, and the processor 41 is configured to run the computer-executable instructions so that the server performs the steps of the method applied to the server as described above.
- an embodiment of the present invention further provides a computer readable storage medium storing one or more programs; when the one or more programs are executed by a terminal, the terminal performs the steps of the method applied to the terminal as described above.
- an embodiment of the present invention further provides a graphical user interface on a terminal, where the terminal includes a display, a memory, multiple applications, and one or more processors configured to execute one or more programs stored in the memory, and the graphical user interface includes a user interface displayed according to the method described above.
- the system bus mentioned above may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus.
- the system bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in the figure, but it does not mean that there is only one bus or one type of bus.
- the communication interface is used to implement communication between the database access device and other devices such as clients, read-write libraries, and read-only libraries.
- the memory may include a random access memory (RAM), and may further include a non-volatile memory, for example, at least one magnetic disk storage or a Secure Digital (SD) memory card.
- the above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; or it may be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
- the foregoing program may be stored in a computer readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
Description
Claims (25)
- A processing method for a terminal having a display screen, comprising: displaying a first picture on the display screen; detecting a preset gesture acting on the first picture; and triggering the following events in response to the preset gesture: determining target content of the first picture according to the preset gesture; acquiring feature information of the target content; acquiring one or more second pictures according to the feature information, wherein a matching degree between the second picture and the first picture is greater than a preset threshold and a sharpness of the second picture is greater than a sharpness of the first picture; and displaying at least one of the one or more second pictures.
- The method according to claim 1, wherein the feature information comprises a feature descriptor, and the acquiring feature information of the target content comprises: extracting feature points from the target content to obtain a first feature point set, and generating a feature descriptor for each feature point in the first feature point set.
- The method according to claim 1 or 2, wherein the sharpness of the second picture being greater than the sharpness of the first picture comprises: among the feature points of the second picture, the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from a center point is greater than a second threshold, wherein the center point is the average of the coordinates of the feature points in the second feature point set and the distance between each feature point and the center point is the number of pixels between the feature point and the center point; and the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- The method according to any one of claims 1 to 3, wherein the preset gesture is a zoom-in gesture, and the determining target content of the first picture according to the preset gesture comprises: using the display content of the display screen as the target content.
- The method according to any one of claims 1 to 3, wherein the preset gesture is a circling gesture, and the determining target content of the first picture according to the preset gesture comprises: using the content enclosed by the circling gesture as the target content.
- The method according to any one of claims 1 to 5, wherein the feature information further comprises at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information.
- The method according to any one of claims 1 to 6, wherein the displaying at least one of the one or more second pictures comprises: displaying the picture with the highest sharpness among the one or more second pictures; or displaying all of the one or more second pictures; or displaying at least one second picture of the one or more second pictures in a scrolling manner.
- The method according to claim 7, wherein when the displaying at least one of the one or more second pictures is specifically displaying the picture with the highest sharpness among the one or more second pictures, the method further comprises: detecting another preset gesture acting on the display; and displaying, in response to the other preset gesture, another second picture whose sharpness is lower than that of the picture with the highest sharpness.
- The method according to any one of claims 1 to 8, further comprising, before the displaying at least one of the one or more second pictures: determining a transformation matrix between at least one second picture and the first picture; and transforming the at least one second picture according to the transformation matrix; wherein the displaying at least one of the one or more second pictures comprises: displaying the transformed at least one second picture.
- The method according to any one of claims 1 to 9, wherein the acquiring one or more second pictures according to the feature information comprises: acquiring one or more second pictures from a memory of the terminal according to the feature information; or acquiring one or more second pictures from a server according to the feature information.
- The method according to any one of claims 1 to 9, wherein the acquiring one or more second pictures according to the feature information comprises: acquiring one or more second pictures from a memory of the terminal according to the feature information; and when no second picture is acquired from the memory, sending the feature information to a server and receiving one or more second pictures sent by the server according to the feature information.
- A terminal, comprising: a display configured to display a first picture; and a processor configured to detect a preset gesture acting on the first picture and trigger the following events in response to the preset gesture: determining target content of the first picture according to the preset gesture, acquiring feature information of the target content, and acquiring one or more second pictures according to the feature information, wherein a matching degree between the second picture and the first picture is greater than a preset threshold and a sharpness of the second picture is greater than a sharpness of the first picture; the display being further configured to display at least one of the one or more second pictures.
- The terminal according to claim 12, wherein the feature information comprises a feature descriptor, and the processor acquiring the feature information of the target content comprises: the processor extracting feature points from the target content to obtain a first feature point set, and generating a feature descriptor for each feature point in the first feature point set.
- The terminal according to claim 12 or 13, wherein the sharpness of the second picture being greater than the sharpness of the first picture comprises: among the feature points of the second picture, the feature points matching the feature points in the first feature point set constitute a second feature point set, and the average distance of the feature points in the second feature point set from a center point is greater than a second threshold, wherein the center point is the average of the coordinates of the feature points in the second feature point set and the distance between each feature point and the center point is the number of pixels between the feature point and the center point; and the second threshold is the average distance of the feature points in the first feature point set from the center point of the first feature point set.
- The terminal according to any one of claims 12 to 14, wherein the preset gesture is a zoom-in gesture, and the processor is further configured to use the display content of the display screen as the target content.
- The terminal according to any one of claims 12 to 14, wherein the preset gesture is a circling gesture, and the processor is further configured to use the content enclosed by the circling gesture as the target content.
- The terminal according to any one of claims 12 to 16, wherein the feature information further comprises at least one of Global Positioning System (GPS) information and inertial measurement unit (IMU) information.
- The terminal according to any one of claims 12 to 17, wherein the display being further configured to display at least one of the one or more second pictures comprises: the display being further configured to display the picture with the highest sharpness among the one or more second pictures; or to display all of the one or more second pictures; or to display at least one second picture of the one or more second pictures in a scrolling manner.
- The terminal according to claim 18, wherein when the display displays the picture with the highest sharpness among the one or more second pictures, the processor is further configured to detect another preset gesture acting on the display and, in response to the other preset gesture, trigger the display to display another second picture whose sharpness is lower than that of the picture with the highest sharpness.
- The terminal according to any one of claims 12 to 19, wherein the processor is further configured to determine, before the display displays at least one of the one or more second pictures, a transformation matrix between at least one second picture and the first picture, and to transform the at least one second picture according to the transformation matrix; and the display is further configured to display the transformed at least one second picture.
- The terminal according to any one of claims 12 to 20, wherein the processor being configured to acquire one or more second pictures according to the feature information comprises: the processor being configured to acquire one or more second pictures from a memory of the terminal according to the feature information, or to acquire one or more second pictures from a server according to the feature information.
- The terminal according to any one of claims 12 to 20, further comprising a transceiver configured to: when the processor acquires no second picture from the memory of the terminal, send the feature information to a server and receive one or more second pictures sent by the server according to the feature information.
- A terminal, comprising a processor, a memory, a communication interface, a system bus, and a display, wherein the memory and the communication interface are connected to the processor through the system bus and communicate with each other, the memory is configured to store computer-executable instructions, the communication interface is configured to communicate with other devices, and the processor is configured to run the computer-executable instructions so that the terminal performs the method according to any one of claims 1 to 11.
- A computer readable storage medium storing one or more programs, wherein when the one or more programs are executed by a terminal, the terminal performs the method according to any one of claims 1 to 11.
- A graphical user interface on a terminal, wherein the terminal comprises a display, a memory, multiple applications, and one or more processors configured to execute one or more programs stored in the memory, and the graphical user interface comprises a user interface displayed according to the method of any one of claims 1 to 11.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112018075332-7A BR112018075332A2 (pt) | 2016-06-08 | 2016-06-08 | método de processamento e terminal |
JP2018564384A JP6720353B2 (ja) | 2016-06-08 | 2016-06-08 | 処理方法及び端末 |
PCT/CN2016/085364 WO2017210908A1 (zh) | 2016-06-08 | 2016-06-08 | 处理方法与终端 |
CN201680060697.9A CN108353210B (zh) | 2016-06-08 | 2016-06-08 | 处理方法与终端 |
US16/308,342 US10838601B2 (en) | 2016-06-08 | 2016-06-08 | Processing method and terminal |
AU2016409676A AU2016409676B2 (en) | 2016-06-08 | 2016-06-08 | Processing method and terminal |
EP16904360.1A EP3461138B1 (en) | 2016-06-08 | 2016-06-08 | Processing method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/085364 WO2017210908A1 (zh) | 2016-06-08 | 2016-06-08 | 处理方法与终端 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017210908A1 true WO2017210908A1 (zh) | 2017-12-14 |
Family
ID=60577504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/085364 WO2017210908A1 (zh) | 2016-06-08 | 2016-06-08 | 处理方法与终端 |
Country Status (7)
Country | Link |
---|---|
US (1) | US10838601B2 (zh) |
EP (1) | EP3461138B1 (zh) |
JP (1) | JP6720353B2 (zh) |
CN (1) | CN108353210B (zh) |
AU (1) | AU2016409676B2 (zh) |
BR (1) | BR112018075332A2 (zh) |
WO (1) | WO2017210908A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111240562A (zh) * | 2018-11-28 | 2020-06-05 | 阿里巴巴集团控股有限公司 | 数据处理方法、装置、终端设备及计算机存储介质 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113987228A (zh) * | 2018-06-20 | 2022-01-28 | 华为技术有限公司 | 一种数据库构建方法、一种定位方法及其相关设备 |
CN116824183B (zh) * | 2023-07-10 | 2024-03-12 | 北京大学 | 基于多重特征描述符的图像特征匹配方法和装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103064921A (zh) * | 2012-12-20 | 2013-04-24 | 北京工业大学 | 一种实现博物馆智能数字导游的方法 |
CN103155585A (zh) * | 2010-07-12 | 2013-06-12 | 欧普斯梅迪库斯股份有限公司 | 网络式上下文关联图像的高分辨率查看系统与方法 |
CN103473565A (zh) * | 2013-08-23 | 2013-12-25 | 华为技术有限公司 | 一种图像匹配方法和装置 |
JP2014197802A (ja) * | 2013-03-29 | 2014-10-16 | ブラザー工業株式会社 | 作業支援システムおよびプログラム |
US20150070357A1 (en) * | 2013-09-09 | 2015-03-12 | Opus Medicus, Inc. | Systems and methods for high-resolution image viewing |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003006198A (ja) * | 2001-04-20 | 2003-01-10 | Canon Inc | 画像処理装置およびその方法、並びに、サーバ装置 |
WO2006081634A2 (en) * | 2005-02-04 | 2006-08-10 | Barco N.V. | Method and device for image and video transmission over low-bandwidth and high-latency transmission channels |
US20090193021A1 (en) | 2008-01-29 | 2009-07-30 | Gupta Vikram M | Camera system and method for picture sharing based on camera perspective |
US8401342B2 (en) * | 2009-01-16 | 2013-03-19 | A9.Com, Inc. | System and method to match images using topologically equivalent correspondences |
US20100331041A1 (en) * | 2009-06-26 | 2010-12-30 | Fuji Xerox Co., Ltd. | System and method for language-independent manipulations of digital copies of documents through a camera phone |
US8818274B2 (en) | 2009-07-17 | 2014-08-26 | Qualcomm Incorporated | Automatic interfacing between a master device and object device |
US8667054B2 (en) | 2010-07-12 | 2014-03-04 | Opus Medicus, Inc. | Systems and methods for networked, in-context, composed, high resolution image viewing |
US9503497B2 (en) | 2011-12-10 | 2016-11-22 | LogMeln, Inc. | Optimizing transfer to a remote access client of a high definition (HD) host screen image |
KR102091137B1 (ko) * | 2012-07-17 | 2020-03-20 | 삼성전자주식회사 | 영상 제공 시스템 및 방법 |
US8717500B1 (en) * | 2012-10-15 | 2014-05-06 | At&T Intellectual Property I, L.P. | Relational display of images |
CN103093680B (zh) | 2012-11-22 | 2014-03-05 | 北京欧本科技有限公司 | 一种展示高分辨率图像的方法及系统 |
US20150350565A1 (en) | 2014-05-29 | 2015-12-03 | Opentv, Inc. | Techniques for magnifying a high resolution image |
JP2016006635A (ja) | 2014-05-29 | 2016-01-14 | パナソニック株式会社 | 制御方法及びプログラム |
US10599810B2 (en) * | 2014-06-04 | 2020-03-24 | Panasonic Corporation | Control method and recording system |
CN104991702B (zh) * | 2015-06-30 | 2017-12-15 | 广东欧珀移动通信有限公司 | 一种终端展示图片的方法及终端 |
CN105389094B (zh) * | 2015-11-05 | 2019-06-25 | 上海斐讯数据通信技术有限公司 | 一种具有触摸显示屏的电子设备及其信息处理方法 |
-
2016
- 2016-06-08 AU AU2016409676A patent/AU2016409676B2/en active Active
- 2016-06-08 BR BR112018075332-7A patent/BR112018075332A2/pt unknown
- 2016-06-08 JP JP2018564384A patent/JP6720353B2/ja active Active
- 2016-06-08 EP EP16904360.1A patent/EP3461138B1/en active Active
- 2016-06-08 CN CN201680060697.9A patent/CN108353210B/zh active Active
- 2016-06-08 WO PCT/CN2016/085364 patent/WO2017210908A1/zh unknown
- 2016-06-08 US US16/308,342 patent/US10838601B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103155585A (zh) * | 2010-07-12 | 2013-06-12 | 欧普斯梅迪库斯股份有限公司 | 网络式上下文关联图像的高分辨率查看系统与方法 |
CN103064921A (zh) * | 2012-12-20 | 2013-04-24 | 北京工业大学 | 一种实现博物馆智能数字导游的方法 |
JP2014197802A (ja) * | 2013-03-29 | 2014-10-16 | ブラザー工業株式会社 | 作業支援システムおよびプログラム |
CN103473565A (zh) * | 2013-08-23 | 2013-12-25 | 华为技术有限公司 | 一种图像匹配方法和装置 |
US20150070357A1 (en) * | 2013-09-09 | 2015-03-12 | Opus Medicus, Inc. | Systems and methods for high-resolution image viewing |
Non-Patent Citations (1)
Title |
---|
See also references of EP3461138A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111240562A (zh) * | 2018-11-28 | 2020-06-05 | 阿里巴巴集团控股有限公司 | 数据处理方法、装置、终端设备及计算机存储介质 |
CN111240562B (zh) * | 2018-11-28 | 2023-04-25 | 阿里巴巴集团控股有限公司 | 数据处理方法、装置、终端设备及计算机存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3461138B1 (en) | 2021-09-22 |
US10838601B2 (en) | 2020-11-17 |
US20190212903A1 (en) | 2019-07-11 |
EP3461138A4 (en) | 2019-04-10 |
AU2016409676B2 (en) | 2020-01-30 |
JP2019522848A (ja) | 2019-08-15 |
BR112018075332A2 (pt) | 2019-03-19 |
EP3461138A1 (en) | 2019-03-27 |
CN108353210B (zh) | 2021-01-29 |
AU2016409676A1 (en) | 2019-01-17 |
CN108353210A (zh) | 2018-07-31 |
JP6720353B2 (ja) | 2020-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10140743B2 (en) | Displaying clusters of media items on a map using representative media items | |
US20170032219A1 (en) | Methods and devices for picture processing | |
US10796447B2 (en) | Image detection method, apparatus and system and storage medium | |
US10438086B2 (en) | Image information recognition processing method and device, and computer storage medium | |
WO2021208667A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
KR20160044470A (ko) | 배경 이미지를 설정하기 위한 방법, 서버 및 시스템 | |
KR20150059466A (ko) | 전자장치에서 이미지 내의 특정 객체를 인식하기 위한 방법 및 장치 | |
US20200402040A1 (en) | Data processing method, terminal device and data processing system | |
EP4191513A1 (en) | Image processing method and apparatus, device and storage medium | |
WO2017107855A1 (zh) | 一种图片搜索方法及装置 | |
WO2017210908A1 (zh) | 处理方法与终端 | |
GB2598015A (en) | Action recognition method and device for target object, and electronic apparatus | |
US20200005689A1 (en) | Generating three-dimensional user experience based on two-dimensional media content | |
WO2020119315A1 (zh) | 人脸采集方法及相关产品 | |
WO2011104698A2 (en) | Method and apparatus providing for control of a content capturing device with a requesting device to thereby capture a desired content segment | |
US20150112997A1 (en) | Method for content control and electronic device thereof | |
WO2019075644A1 (zh) | 人像照片的搜索方法和终端 | |
JP2023519755A (ja) | 画像レジストレーション方法及び装置 | |
US11024305B2 (en) | Systems and methods for using image searching with voice recognition commands | |
EP2784736A1 (en) | Method of and system for providing access to data | |
WO2020125014A1 (zh) | 一种信息处理方法、服务器、终端及计算机存储介质 | |
CN105975621B (zh) | 识别浏览器页面中的搜索引擎的方法及装置 | |
US20150035864A1 (en) | Method, apparatus, computer program and user interface | |
US20200409521A1 (en) | Method for obtaining vr resource and terminal | |
WO2023095770A1 (ja) | 拡張現実表示装置、サーバ装置、拡張現実表示システム、拡張現実表示方法、及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16904360 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018564384 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112018075332 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2016904360 Country of ref document: EP Effective date: 20181220 |
|
ENP | Entry into the national phase |
Ref document number: 2016409676 Country of ref document: AU Date of ref document: 20160608 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112018075332 Country of ref document: BR Kind code of ref document: A2 Effective date: 20181206 |