CN111240563A - Information display method, device, equipment and storage medium - Google Patents

Information display method, device, equipment and storage medium

Info

Publication number
CN111240563A
Authority
CN
China
Prior art keywords
target area
image
information
identification information
interacted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360901.0A
Other languages
Chinese (zh)
Other versions
CN111240563B (en)
Inventor
罗飞虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911360901.0A priority Critical patent/CN111240563B/en
Publication of CN111240563A publication Critical patent/CN111240563A/en
Application granted granted Critical
Publication of CN111240563B publication Critical patent/CN111240563B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an information display method, an information display device, information display equipment and a storage medium. The method comprises the following steps: responding to a click operation on an image to be interacted, and determining click position information; determining the target area to which the click position information belongs in the image to be interacted; highlighting the target area and the identification information in the target area, the identification information in the target area being information which is pre-configured in the target area and can represent the characteristics of the target area; and responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment. The invention can realize accurate interaction between user operations and the image to be interacted while providing rich and clear information guidance for the user.

Description

Information display method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an information display method, apparatus, device, and storage medium.
Background
When an image containing a large amount of data information is displayed with an electronic screen as the medium, the volume of information in the image and the single display form make it difficult for a user to find a point of interest in a short time according to his or her needs. Taking a map as an example: a map is a model of the objective world and a carrier of spatial information. Compared with a paper map, an electronic map separates data storage from data display, displays a larger amount of information, and offers richer information types; an electronic map may include natural geographic elements (such as water systems, vegetation, and traffic), socioeconomic elements (such as administrative areas), service information (such as real-time road conditions and scenic spot scenes), and multidimensional natural and social information (such as population distribution thermodynamic diagrams and air quality diagrams).
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an information display method, apparatus, device and storage medium, which can realize accurate interaction between a user operation and an image to be interacted and provide rich and clear information guidance for the user, so that the user can conveniently find a target point.
In order to solve the technical problem, in one aspect, the present invention provides an information display method, including:
responding to the click operation of the image to be interacted, and determining click position information;
determining a target area of the click position information in the image to be interacted;
highlighting the target area and the identification information in the target area; the identification information in the target area is information which is pre-configured in the target area and can represent the characteristics of the target area;
responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment.
In another aspect, the present invention provides an information display apparatus, including:
the click position information response module is used for responding to the click operation of the image to be interacted and determining click position information;
the target area determining module is used for determining a target area to which the click position information belongs in the image to be interacted;
the highlighting module is used for highlighting the target area and the identification information in the target area; the identification information in the target area is information which is pre-configured in the target area and can represent the characteristics of the target area;
and the adjustment request response module is used for responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment.
In another aspect, the present invention provides an apparatus, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the information display method described above.
In another aspect, the present invention provides a computer storage medium, where at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to perform the information display method described above.
The embodiment of the invention has the following beneficial effects:
according to the method, the click position information and the target area of the click position information in the image to be interacted are determined by responding to the click operation of a user on the image to be interacted; highlighting the target area and the identification information in the target area; responding to an adjustment request of a user for the image to be interacted, adjusting the target area based on the adjustment request, and adjusting the identification information in the target area based on an adjustment request opposite to the adjustment request. The invention can realize the accurate interaction between the user operation and the image to be interacted and vividly display the regional characteristics in the image to be interacted, thereby providing rich and definite information guidance for the user and facilitating the user to search a target point.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the invention;
fig. 2 is a flowchart of an information displaying method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for dividing a region of a target image according to an embodiment of the present invention;
fig. 4 is a flowchart of a target area determining method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for generating an image to be interacted according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for highlighting a target area according to an embodiment of the present invention;
fig. 7 is a flowchart of a target area adjustment method according to an embodiment of the present invention;
fig. 8 is a flowchart of a method for displaying associated information according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of region division according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a map rendering process provided by an embodiment of the invention;
FIG. 11 is a schematic diagram illustrating a map information display according to an embodiment of the present invention;
FIG. 12 is a schematic view of an information display device according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, the following explanations are made with respect to the terms involved in the embodiments of the present specification:
Canvas: an HTML5 element that allows a scripting language to dynamically render bitmap images.
RGB color: an industry color standard in which various colors are obtained by varying and superimposing the three color channels of red (R), green (G) and blue (B); RGB stands for the colors of the red, green and blue channels. Web page colors are based on the optical RGB colors and are expressed as a six-digit hexadecimal code of the general form #RRGGBB, where each digit is 0-9 or A-F; black, for example, is written as #000000 in web page code.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the invention is shown, where the implementation environment may include: at least a first terminal 110 and a second terminal 120, said first terminal 110 and said second terminal 120 being capable of data communication over a network.
Specifically, the first terminal 110 obtains relevant data from the second terminal 120 as required and performs configuration rendering based on the obtained data to generate an image to be interacted; when the first terminal 110 receives an operation request of the user for the image to be interacted, it displays to the user the image information resulting from the corresponding operation. Alternatively, the second terminal 120 performs configuration rendering based on the related data to generate the image to be interacted and sends the image information to the first terminal 110, and when the first terminal 110 receives an operation request of the user for the image to be interacted, it displays to the user the image information resulting from the corresponding operation.
The first terminal 110 may communicate with the second terminal 120 based on a Browser/Server (B/S) or Client/Server (C/S) mode. The first terminal 110 may include physical devices, and may also include software running on the physical devices, such as application programs. The operating system running on the first terminal 110 in the embodiment of the present invention may include, but is not limited to, Android, iOS, Linux, Windows, and the like.
The second terminal 120 and the first terminal 110 may establish a communication connection through a wired or wireless connection, and the second terminal 120 may include an independently operating server, or a distributed server, or a server cluster composed of multiple servers, where the server may be a cloud server.
In order to solve the technical problem in the prior art that the information display form is single, or the display is not sufficiently effective, when an image containing a large amount of data information is displayed, an embodiment of the present invention provides an information display method. Referring to fig. 2, the execution subject of the method may be the first terminal in fig. 1, and the method may specifically include:
s210, responding to the click operation of the image to be interacted, and determining click position information.
The image to be interacted can be generated by performing configuration rendering based on related data in advance, and the click position information can be specifically coordinate values of a click position; in the embodiment of the present specification, a coordinate system may be established based on a certain point in the image to be interacted as an origin, so that each point in the image to be interacted has a coordinate value corresponding to the point, and specifically, a rectangular coordinate system may be established with an upper left corner of the image as the origin.
When a click operation by the user on the current image to be interacted is received, the relevant information of the clicked position is determined.
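As an illustrative sketch (assuming the image to be interacted is rendered in an HTML5 canvas element; the element id 'interactCanvas' and the handleClick function are assumptions, not names from the original), the click coordinates in a coordinate system whose origin is the upper left corner of the image can be obtained as follows:

// Sketch: obtain click coordinates relative to the upper-left corner of the canvas.
var canvas = document.getElementById('interactCanvas'); // hypothetical element id
canvas.addEventListener('click', function (event) {
  var rect = canvas.getBoundingClientRect();
  // Convert viewport coordinates to canvas pixels (the canvas may be displayed at a scaled size).
  var x = Math.round((event.clientX - rect.left) * (canvas.width / rect.width));
  var y = Math.round((event.clientY - rect.top) * (canvas.height / rect.height));
  handleClick(x, y); // hypothetical handler carrying out steps S220 and S230
});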
S220, determining a target area of the click position information in the image to be interacted.
Since the image to be interacted may include a plurality of different regions, after the click position information is determined, the region to which the click position belongs in the image to be interacted needs to be determined. A region-division process may therefore precede this determination; referring to fig. 3, a method for dividing a target image into regions is shown, and the method may include:
and S310, carrying out region division on the target image to obtain a region division image consisting of a plurality of regions.
The area division of the target image may be based on a relevant division rule. For example, when the target image is a map, the map may be divided into a plurality of different areas along administrative boundaries and each area outlined, thereby obtaining an area division image composed of a plurality of areas.
S320, filling colors for each area in the area division image, wherein the colors filled in any two areas are different.
S330, establishing a corresponding relation between each area and the corresponding filling color.
S340, determining the filling color of each point in the area division image, and establishing a corresponding relation between the coordinate value of each point and the corresponding filling color.
A specific RGB color is filled into each region, and the correspondence between each region and its RGB color is stored in a color region list RGBZoneList; the RGB value corresponding to each coordinate point in the area division image is obtained through the canvas interface, and the correspondence between each coordinate point and its RGB color is stored in a color coordinate point list RGBPotList. Establishing the color region list RGBZoneList and the color coordinate point list RGBPotList makes it convenient to determine, later on, the region to which a coordinate point belongs from its coordinate value. Here, the coordinate system may be established with the upper left corner of the target image as the origin, and it is the same coordinate system as in step S210; since the contour of the target image does not change during processing, the coordinate system and the coordinate values of the corresponding points do not change either.
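A minimal sketch of building the two lists, assuming the color-filled area division image has already been drawn onto a canvas and that every name other than RGBZoneList and RGBPotList (regionCanvas, regions, toHex) is an illustrative assumption:

// Sketch: build the color-to-region list and the coordinate-to-color list.
// 'regions' is assumed to describe the unique fill assigned to each region,
// e.g. [{ id: 'p31', color: '#cccec0' }, ...].
var RGBZoneList = {};
regions.forEach(function (r) {
  RGBZoneList[r.color] = r.id; // e.g. '#cccec0' -> 'p31'
});

var ctx = regionCanvas.getContext('2d');
var data = ctx.getImageData(0, 0, regionCanvas.width, regionCanvas.height).data; // RGBA bytes
function toHex(v) { return ('0' + v.toString(16)).slice(-2); }
var RGBPotList = {};
for (var y = 0; y < regionCanvas.height; y++) {
  for (var x = 0; x < regionCanvas.width; x++) {
    var i = (y * regionCanvas.width + x) * 4; // 4 bytes (R, G, B, A) per pixel
    RGBPotList[x + ',' + y] = '#' + toHex(data[i]) + toHex(data[i + 1]) + toHex(data[i + 2]);
  }
}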
For example, the correspondence list RGBZoneList for RGB colors and regions may be:
{'#cccec0':'p31','#ccd0b4':'p26','#ccd2a8':'p28','#ccd49c':'p29','#ccd690':'p27','#ccd884':'p30','#ccda78':'p20','#ccdc6c':'p25','#ccde60':'p21','#cce054':'p22','#cce248':'p23','#cce43c':'p24','#cce630':'p13','#cce824':'p19','#ccea18':'p14','#ccec0c':'p15','#ccee00':'p16','#cceff4':'p18','#ccf1e8':'p17','#ccf3dc':'p9','#ccf5d0':'p8','#ccf7c4':'p11','#ccf9b8':'p10','#ccfbac':'p12','#ccfda0':'p6','#ccff94':'p4', …};
where '#cccec0':'p31' can be understood as follows: the fill color corresponding to region p31 is #cccec0; the remaining entries are interpreted in the same way.
After the area division is performed, the area to which a click position belongs may be determined based on the position of the user's click operation; referring to fig. 4, a target area determining method is shown, and the method may include:
and S410, determining a target filling color corresponding to the coordinate value of the click position based on the corresponding relation between the coordinate value of each point and the corresponding filling color.
S420, determining a target area corresponding to the target filling color based on the corresponding relation between each area and the corresponding filling color.
First, the target filling color corresponding to the coordinate value of the coordinate point clicked by the user is determined from the color coordinate point list RGBPotList; then the target area corresponding to that filling color is determined from the color region list RGBZoneList. For example, when the user clicks the image to be interacted, the corresponding RGB value (e.g., '#ccea18') is obtained from the clicked coordinate point, and the area corresponding to that RGB value is then looked up.
In this way, based on the two correspondence relations, the selected target area can be determined from the user's click position.
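Under the same assumptions as the sketch above, a click can then be resolved to its target area with two look-ups (the function name is illustrative):

// Sketch: map a click position to the region that contains it.
function findTargetArea(x, y) {
  var color = RGBPotList[x + ',' + y];      // coordinate -> fill color, e.g. '#ccea18'
  return color ? RGBZoneList[color] : null; // fill color -> region id, e.g. 'p14'
}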
For the above specific generation method of the image to be interacted, referring to fig. 5, the method may include:
and S510, traversing each area in the area division image.
According to a certain traversal sequence, each region in the region division image is traversed, and the following operations are carried out on each region:
s520, for the current area, acquiring to-be-configured identification information in the current area, wherein the to-be-configured identification information comprises position information.
In this embodiment of the present specification, the position information of the identification information to be configured may refer to its position within the current region, or to its position within the whole area division image. As an illustration, the position information of a certain piece of identification information to be configured may be the center position of the current region, where the center position may lie within a controllable range and is not necessarily the absolute center.
S530, configuring the identification information to be configured to the corresponding position of the current area based on the position information of the identification information to be configured.
The identification information required by the current region is flexibly configured to the corresponding position using a visual editing tool, such as Photoshop or Flash.
And S540, after the identification information configuration is completed for each area, deriving the area division image containing the configured identification information.
The area division image containing the configured identification information includes the area division image filled with colors, and the identification information and the position information contained in each area.
And S550, rendering the derived region division image containing the configured identification information to generate the image to be interacted.
The area division image containing the configured identification information is rendered onto the page through canvas, generating a complete image to be interacted for the user to interact with.
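A minimal rendering sketch, assuming the exported area division image and the configured identifier pictures have been loaded as image resources; 'baseImage', 'markers' and the element id are assumptions rather than names from the original:

// Sketch: render the area division image and the configured identifiers through canvas.
// 'markers' is assumed to hold one entry per configured identifier,
// e.g. { img: <loaded Image>, x: 120, y: 80 }, where x and y come from the position information.
var ctx = document.getElementById('interactCanvas').getContext('2d');
ctx.drawImage(baseImage, 0, 0); // draw the exported area division image
markers.forEach(function (m) {
  ctx.drawImage(m.img, m.x, m.y); // draw each identifier at its configured position
});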
And S230, highlighting the target area and the identification information in the target area. The identification information in the target area is information which is configured in the target area in advance and can represent the characteristics of the target area.
The identification information in the embodiments of the present description may be an identifier picture or an identifier description, where the identifier of each area may be an object representing the characteristics of the area, or a representative object in the area, such as a mountain, a building, or a scenic spot. When the identification information is an identifier picture, the corresponding picture can be added at the corresponding position of the area during identifier configuration; when the identification information is an identifier description, the corresponding identifier text can be added at the corresponding position of the area during identifier configuration.
When the area selected by the user has been determined, the target area may be highlighted so that the user can view the relevant information of the area in detail; referring to fig. 6, a target area highlighting method is shown, and the method may include:
s610, determining the level of each area in the image to be interacted.
S620, placing the level of the target area at the top layer, and placing the levels of the areas other than the target area below the mask layer.
And S630, highlighting the target area and the identification information in the target area.
After a certain area is selected, the selected area is placed at the topmost layer by dynamically changing the rendering level, the selected area and the identification information in it are highlighted, and the other areas are placed below the mask layer. The corresponding code of this process is implemented as follows:
// Move the mask layer to the top of the display list, then move the selected
// sub-area (looked up by its id) above the mask so that it is highlighted.
this.zone.setChildIndex(this.maskSpt, this.zone.numChildren - 1);
var spt = this.zone['bp' + id];
this.zone.setChildIndex(spt, this.zone.numChildren - 1);
S240, responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment.
After the target area selected by the user is highlighted, adjustment operations by the user on the highlighted area may further be received; referring to fig. 7, a target area adjustment method is shown, and the method may include:
and S710, analyzing the adjustment request, and determining the type of the adjustment request, wherein the adjustment request comprises an adjustment proportion, and the type of the adjustment request comprises an amplification request and a reduction request.
S720, when the type of the adjustment request is an amplification request, amplifying the target area based on the adjustment proportion, and reducing the identification information in the target area based on the adjustment proportion.
And S730, when the type of the adjustment request is a reduction request, reducing the target area based on the adjustment proportion, and amplifying the identification information in the target area based on the adjustment proportion.
In the embodiment of the invention, the zooming of the whole image to be interacted is consistent with the zooming of the target area: when the target area is amplified, the whole image to be interacted is amplified, and when the target area is reduced, the whole image is reduced. The target area and the identification information in it are adjusted by recognizing the user's adjustment operation on the target area. Specifically, when the first terminal is a smart phone, the adjustment request can be recognized from the user's zoom gesture on the electronic screen; when the first terminal is a desktop computer, the adjustment request can be recognized from the sliding operation of the mouse, and the corresponding adjustment ratio can be determined from the extent of the user's zoom gesture or of the mouse sliding.
When the adjustment request is an amplification request, the target area is amplified and the identification information in the target area is reduced; when the adjustment request is a reduction request, the target area is reduced and the identification information in the target area is amplified. In other words, the identification information is scaled in the direction opposite to the target area, which clearly anchors the position of the identification information in the target area and keeps its proportion coordinated with the target area, so that the user can perceive its position more accurately. Specifically, because the identification information in each region is configured based on its position information within the region, when the adjustment request is an amplification request the target area is amplified by the corresponding ratio and the identification information in it is reduced by the corresponding ratio; as the target area is amplified, the identification information keeps shrinking, so the precise position of the identification information in the target area can be displayed, and that position can guide the search for other specific target points in the area. When the adjustment request is a reduction request, the target area is reduced by the corresponding ratio and the identification information in it is amplified by the corresponding ratio; thus, while the whole image is reduced, the identification information and its approximate position in the target area remain clearly visible, the identification information and approximate position of every area in the whole image can be grasped globally, and global guidance is provided for locating a target point in the image. For example, when a target point needs to be found in a certain area and the identification information of that area is known, the identification information is located in the global image and the area containing it is determined as the target area; once the target area is determined, other target points are found based on the position information of the identification information in the target area.
The specific code implementing this adjustment is reproduced in the original publication as an image (Figure BDA0002337140580000101).
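Since that image cannot be reproduced here, the following is a minimal sketch of the opposite-direction scaling it describes; the object and property names (zone, markers, scaleX, scaleY) follow the display-object style of the earlier snippet but are assumptions, not taken from the original code image:

// Sketch: adjust the target area per the request and scale its identifiers the opposite way.
function adjustTargetArea(zone, markers, ratio, type) {
  var scale = (type === 'amplify') ? ratio : 1 / ratio; // amplification vs. reduction request
  zone.scaleX = zone.scaleY = scale;
  markers.forEach(function (m) {
    // The identification information is scaled inversely so that its on-screen size stays
    // roughly constant and its anchored position within the area remains easy to read.
    m.scaleX = m.scaleY = 1 / scale;
  });
}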
After determining the selected target area, the user may select a desired target point from the target area as needed; referring to fig. 8, a method for displaying associated information is shown, and the method may include:
and S810, determining associated display information corresponding to each point in the image to be interacted in advance.
And S820, responding to the selection operation of the target point in the target area.
S830, target association display information corresponding to the target point is obtained.
And S840, displaying the target association display information in a preset form.
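A minimal sketch of this flow, assuming the associated display information has been prepared in advance as a lookup table keyed by target-point identifier; all names here (associatedInfo, showInfoPanel, onTargetPointSelected) are illustrative:

// Sketch: display the associated display information of a selected target point.
var associatedInfo = {
  'pointA': { text: 'Description of point A', image: 'pointA.png' } // pre-configured per point (S810)
};
function onTargetPointSelected(pointId) {  // S820: selection of a target point
  var info = associatedInfo[pointId];      // S830: obtain the target associated display information
  if (info) {
    showInfoPanel(info);                   // S840: hypothetical renderer for the preset text/picture form
  }
}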
In this way, when a user needs to select a certain target point in the image to be interacted and knows the area in which the target point is located, the user can click that area; guided by the highlighted area and the identification information within it, and combining the amplification and reduction operations on the area, the target point can be found conveniently, saving search time.
After the target point is found, the associated display information of the target point is displayed according to the preset correspondence between target points and associated display information; the specific display form may include, but is not limited to, text and pictures. In addition, after the target point is found, navigation information from the current position to the target point can also be generated.
In the information display method provided in the embodiments of the present specification, the implementation carrier may be a web page, a stand-alone APP, or an applet embedded in a related APP. In the specific implementation, the step of generating the image to be interacted and the step in which the user interacts with the image through the terminal may be implemented in the same terminal or in different terminals. When they are implemented in the same terminal, that terminal first generates the image to be interacted based on the related data and then receives the operations on the image initiated by the user through it. When they are implemented in different terminals, the first implementation terminal generates the image to be interacted based on the related data, packages it and sends it to the second implementation device; the second implementation device receives and parses the packaged image so that it can be stored locally, and then receives the operations on the image initiated by the user through the second implementation terminal.
The following describes a specific implementation process of the present invention with a concrete example; taking a map as the image, the implementation process may include:
1. A map in the required style is drawn according to the real map information, the map is divided based on administrative regions (for example, by province), and each independent region is drawn.
2. A map used for sensing (not necessarily displayed) is made, and the irregular boundaries of the different areas in it are filled with the corresponding RGB color values, as shown in fig. 9; thereby, the following can be realized:
the coordinates of the clicking position are obtained when the user clicks, and the area clicked by the user can be identified according to the RGB color values corresponding to the coordinates of the clicking position, so that accurate response to the special irregular boundaries of the map is achieved;
the user can obtain the numerical values of the map which is enlarged, reduced and moved by the user through the stretching, converging and moving of the two fingers on the screen.
3. On a client supporting the HTML5 canvas tag, the map of the outlined areas is rendered to the page through canvas; specifically, the specially drawn picture is rendered on the web page through an interface such as drawImage of the browser canvas tag; thereby, the following can be realized:
the identifiers corresponding to the characteristics of different areas in the map are rendered to the upper layer of the map through the position configuration information, so that the map is more abundantly represented;
when user interaction is sensed, the corresponding area is highlighted: displaying the area on the highest layer, displaying a mask layer, and simultaneously rendering other areas below the mask layer;
when a user operates the screen with two fingers, the map is enlarged and reduced by the aid of the two-finger interaction center point, and the markers on the map are dynamically and synchronously reversely zoomed, so that the markers show the visual effect of the enlarged scale and position in the middle layer of the map.
Specifically, referring to fig. 10, a map rendering process is shown: fig. 10(a) shows a map displaying an individual area in the editor, fig. 10(b) shows a marker being configured on the individually displayed area, and fig. 10(c) shows the full view of the map after rendering is completed.
Referring to fig. 11, a map information presentation is shown: fig. 11(a) shows that when the user selects an area, the area and the identifier in it are highlighted; after the user confirms the selection, a linked interaction effect between a pop-up layer and the map can be realized, for example in a question-and-answer form, as shown in fig. 11(b).
In the above example, the interactive operations based on the map image are more accurate thanks to the identification of pixel values, markers can be configured in each area and displayed dynamically, and the map information display is richer. The specific application field of the invention includes, but is not limited to, the map field; any field involving image-based interaction and information display can adopt the invention.
According to the method, in response to a click operation of a user on an image to be interacted, the click position information and the target area to which the click position belongs in the image to be interacted are determined; the target area and the identification information in the target area are highlighted; and in response to an adjustment request of the user for the image to be interacted, the target area is adjusted based on the adjustment request while the identification information in the target area is adjusted in the opposite direction. The invention can realize accurate interaction between user operations and the image to be interacted and vividly display the regional characteristics in the image, thereby providing rich and clear information guidance for the user and facilitating the search for a target point.
The present embodiment further provides an information displaying apparatus, referring to fig. 12, the apparatus may include:
the click position information response module 1210 is used for responding to the click operation of the image to be interacted and determining click position information;
a target area determining module 1220, configured to determine a target area to which the click position information belongs in the image to be interacted;
a highlighting module 1230, configured to highlight the target area and the identification information in the target area; the identification information in the target area is information which is pre-configured in the target area and can represent the characteristics of the target area;
an adjustment request response module 1240, configured to, in response to an adjustment request for the image to be interacted, correspondingly adjust the target area based on the adjustment request, and adjust the identification information in the target area in the direction opposite to the requested adjustment.
Further, the apparatus further comprises a region dividing module, the region dividing module comprising:
the first division module is used for carrying out region division on the target image to obtain a region division image consisting of a plurality of regions;
the color filling module is used for filling colors into each region in the region division image, wherein the colors filled into any two regions are different;
the first establishing module is used for establishing the corresponding relation between each area and the corresponding filling color;
and the second establishing module is used for determining the filling color of each point in the area division image and establishing the corresponding relation between the coordinate value of each point and the corresponding filling color.
Further, the click position information is a coordinate value of a click position; accordingly, the target area determination module 1220 includes:
the first determining module is used for determining a target filling color corresponding to the coordinate value of the click position based on the corresponding relation between the coordinate value of each point and the corresponding filling color;
and the second determining module is used for determining the target area corresponding to the target filling color based on the corresponding relation between each area and the corresponding filling color.
Further, the device also comprises an image to be interacted generation module, wherein the image to be interacted generation module comprises:
the traversing module is used for traversing each region in the region division image;
the identification information acquisition module is used for acquiring identification information to be configured in the current area, wherein the identification information to be configured comprises position information;
the identification information configuration module is used for configuring the identification information to be configured to the corresponding position of the current area based on the position information of the identification information to be configured;
the export module is used for exporting the area division image containing the configured identification information after the identification information configuration is completed for each area;
and the first generation module is used for rendering the derived area division image containing the configured identification information and generating the image to be interacted.
Further, the highlighting module 1230 includes:
the hierarchy determining module is used for determining the hierarchy of each region in the image to be interacted;
the hierarchical placement module is used for placing the hierarchy of the target area at the top layer and placing the hierarchy of the areas except the target area at the lower layer of the mask layer;
and the highlight display module is used for highlighting the target area and the identification information in the target area.
Further, the adjustment request response module 1240 includes:
the analysis module is used for analyzing the adjustment request and determining the type of the adjustment request, wherein the adjustment request comprises an adjustment proportion, and the type of the adjustment request comprises an amplification request and a reduction request;
a first adjusting module, configured to, when the type of the adjustment request is an amplification request, amplify the target area based on the adjustment ratio, and reduce the identification information in the target area based on the adjustment ratio;
and the second adjusting module is used for reducing the target area based on the adjusting proportion and amplifying the identification information in the target area based on the adjusting proportion when the type of the adjusting request is a reducing request.
Further, the apparatus further comprises an association display module, the association display module comprising:
the associated display information determining module is used for determining associated display information corresponding to each point in the image to be interacted in advance;
the first response module is used for responding to the selection operation of a target point in the target area;
the first acquisition module is used for acquiring target association display information corresponding to the target point;
and the first display module is used for displaying the target association display information in a preset form.
The device provided in the above embodiments can execute the method provided in any embodiment of the present invention and has the corresponding functional modules and beneficial effects. For technical details not elaborated in the above embodiments, reference may be made to the method provided in any embodiment of the invention.
The present embodiment also provides a computer-readable storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded by a processor and executes any one of the methods described in the present embodiment.
Further, fig. 13 shows a hardware structure diagram of a device for implementing the method provided by the embodiment of the present invention; the device may participate in constituting or may contain the apparatus provided by the embodiment of the present invention. As shown in fig. 13, the device 10 may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, the device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 13 is only an illustration and does not limit the structure of the electronic device. For example, device 10 may also include more or fewer components than shown in fig. 13, or have a different configuration than shown in fig. 13.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the device 10 (or mobile device). As referred to in the embodiments of the invention, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the method described in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, so as to implement the information display method described above. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the device 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of such networks may include wireless networks provided by the communication provider of the device 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the device 10 (or mobile device).
Any of the methods described above in this embodiment can be implemented based on the apparatus shown in fig. 13.
The present specification presents method steps as described in the examples or flowcharts, but more or fewer steps may be included based on routine or non-inventive labor. The steps and sequences recited in the embodiments are only one of many possible orders of execution and do not represent the only order. In an actual system or product, the steps may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures.
The configurations shown in the present embodiment are only partial configurations related to the present invention, and do not constitute a limitation on the devices to which the present invention is applied, and a specific device may include more or less components than those shown, or combine some components, or have different arrangements of components. It should be understood that the methods, apparatuses, and the like disclosed in the embodiments may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or unit modules.
Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An information display method, comprising:
responding to the click operation of the image to be interacted, and determining click position information;
determining a target area of the click position information in the image to be interacted;
highlighting the target area and the identification information in the target area; the identification information in the target area is information which is pre-configured in the target area and can represent the characteristics of the target area;
responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment.
2. The information presentation method according to claim 1, wherein before the responding to the click operation on the image to be interacted, the method further comprises:
carrying out region division on the target image to obtain a region division image consisting of a plurality of regions;
filling colors for each region in the region division image, wherein the colors filled in any two regions are different;
establishing a corresponding relation between each area and the corresponding filling color;
and determining the filling color of each point in the area division image, and establishing a corresponding relation between the coordinate value of each point and the corresponding filling color.
3. The information presentation method according to claim 2, wherein the click position information is a coordinate value of a click position;
correspondingly, the determining the target area to which the click position information belongs in the image to be interacted includes:
determining a target filling color corresponding to the coordinate value of the click position based on the corresponding relation between the coordinate value of each point and the corresponding filling color;
and determining a target area corresponding to the target filling color based on the corresponding relation between each area and the corresponding filling color.
4. The information presentation method according to claim 2, wherein after the area division is performed on the target image to obtain the area division image composed of a plurality of areas, the method further comprises:
traversing each region in the region division image;
for a current area, acquiring identification information to be configured in the current area, wherein the identification information to be configured comprises position information;
configuring the identification information to be configured to the corresponding position of the current area based on the position information of the identification information to be configured;
after the identification information configuration is completed for each area, deriving an area division image containing the configured identification information;
rendering the derived area division image containing the configured identification information to generate the image to be interacted.
5. The information presentation method according to claim 1, wherein the highlighting the target area and the identification information in the target area comprises:
determining the level of each region in the image to be interacted;
placing the level of the target area at the top layer, and placing the level of the area except the target area at the lower layer of the mask layer;
and highlighting the target area and the identification information in the target area.
6. The information presentation method according to claim 1, wherein the responding to an adjustment request for the image to be interacted, correspondingly adjusting the target area based on the adjustment request, and adjusting the identification information in the target area in the direction opposite to the requested adjustment comprises:
analyzing the adjustment request, and determining the type of the adjustment request, wherein the adjustment request comprises an adjustment proportion, and the type of the adjustment request comprises an amplification request and a reduction request;
when the type of the adjustment request is an amplification request, amplifying the target area based on the adjustment proportion, and reducing the identification information in the target area based on the adjustment proportion;
and when the type of the adjustment request is a reduction request, reducing the target area based on the adjustment proportion, and amplifying the identification information in the target area based on the adjustment proportion.
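Illustrative only: the opposite adjustment can be sketched as scaling the target area by the adjustment ratio and its identification information by the inverse ratio. If the label element is nested inside the area element, the inverse scale keeps the label at a roughly constant on-screen size while the area grows or shrinks. The element handles and the request shape are assumptions.

```typescript
function applyAreaAdjustment(
  areaEl: HTMLElement,
  labelEl: HTMLElement,
  req: { type: "enlarge" | "reduce"; ratio: number },
): void {
  const ratio = req.type === "enlarge" ? req.ratio : 1 / req.ratio;
  areaEl.style.transform = `scale(${ratio})`;       // enlarge or reduce the target area
  labelEl.style.transform = `scale(${1 / ratio})`;  // adjust the identification information oppositely
}
```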
7. The method according to claim 1, further comprising:
determining, in advance, associated display information corresponding to each point in the image to be interacted with;
responding to a selection operation on a target point in the target area;
acquiring target associated display information corresponding to the target point; and
displaying the target associated display information in a preset form.
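Illustrative only: the associated display information can be sketched as a pre-computed map keyed by point coordinates, consulted when a point is selected and shown in a preset form, here a small tooltip. The key format and the tooltip styling are assumptions of this sketch.

```typescript
// "x,y" coordinate key -> associated display information, prepared in advance.
const associatedInfo = new Map<string, string>();

function setAssociatedInfo(x: number, y: number, info: string): void {
  associatedInfo.set(`${x},${y}`, info);
}

function onPointSelected(container: HTMLElement, x: number, y: number): void {
  const info = associatedInfo.get(`${x},${y}`);
  if (info === undefined) return;
  // Show the target associated display information in a preset form: a tooltip.
  const tip = document.createElement("div");
  tip.textContent = info;
  tip.style.cssText =
    `position:absolute;left:${x}px;top:${y}px;` +
    "background:#333;color:#fff;padding:2px 6px;border-radius:3px;";
  container.appendChild(tip);
}
```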
8. An information presentation device, comprising:
a click position information response module, configured to respond to a click operation on an image to be interacted with and determine click position information;
a target area determining module, configured to determine a target area to which the click position information belongs in the image to be interacted with;
a highlighting module, configured to highlight the target area and identification information in the target area, wherein the identification information in the target area is information that is pre-configured in the target area and is capable of representing characteristics of the target area; and
an adjustment request response module, configured to respond to an adjustment request for the image to be interacted with, adjust the target area based on the adjustment request, and adjust the identification information in the target area in a direction opposite to the adjustment request.
9. An apparatus comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the information presentation method according to any one of claims 1 to 7.
10. A computer storage medium, wherein at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the information presentation method according to any one of claims 1 to 7.
CN201911360901.0A 2019-12-25 2019-12-25 Information display method, device, equipment and storage medium Active CN111240563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360901.0A CN111240563B (en) 2019-12-25 2019-12-25 Information display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360901.0A CN111240563B (en) 2019-12-25 2019-12-25 Information display method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111240563A true CN111240563A (en) 2020-06-05
CN111240563B CN111240563B (en) 2022-02-18

Family

ID=70869322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360901.0A Active CN111240563B (en) 2019-12-25 2019-12-25 Information display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111240563B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800048A (en) * 2012-07-06 2012-11-28 广州亿程交通信息有限公司 Electronic map scaling display method
CN104134398A (en) * 2013-05-03 2014-11-05 腾讯科技(深圳)有限公司 Method of presenting map details and device
US20190347843A1 (en) * 2013-07-25 2019-11-14 Duelight Llc Systems and methods for displaying representative images
CN104699709A (en) * 2013-12-09 2015-06-10 方正国际软件(北京)有限公司 Method and system for combined hierarchical display of multiple positioning points
CN104199594A (en) * 2014-09-28 2014-12-10 厦门幻世网络科技有限公司 Target position positioning method and device based on touch screen
CN105740291A (en) * 2014-12-12 2016-07-06 深圳市腾讯计算机系统有限公司 Map interface display method and device
US20170123534A1 (en) * 2015-11-04 2017-05-04 Panasonic Automotive Systems Company Of America, Division Of Panasonic Corporation Of North America Display zoom operation with both hands on steering wheel
US20190281411A1 (en) * 2017-01-12 2019-09-12 Tencent Technology (Shenzhen) Company Limited Interaction information obtaining method, interaction information setting method, user terminal, system, and storage medium
CN108664194A (en) * 2017-03-29 2018-10-16 中兴通讯股份有限公司 Display methods and device
CN108460725A (en) * 2018-03-22 2018-08-28 腾讯科技(深圳)有限公司 Map-indication method, device, equipment and storage medium
CN110568975A (en) * 2019-09-11 2019-12-13 珠海格力电器股份有限公司 Method for searching application program, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
河上街: "Solving the problem that when the div is too large, the selected regional map is displayed too small, while raising the map zoom level makes the regional map fill the whole div", 《HTTPS://BLOG.CSDN.NET/LIUJUCAI/ARTICLE/DETAILS/100029898》 *

Also Published As

Publication number Publication date
CN111240563B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN109862414B (en) Mask bullet screen display method and device and server
CN110069580B (en) Road marking display method and device, electronic equipment and storage medium
CN111080799A (en) Scene roaming method, system, device and storage medium based on three-dimensional modeling
US20160142471A1 (en) Systems and methods for facilitating collaboration among multiple computing devices and an interactive display device
US11561675B2 (en) Method and apparatus for visualization of public welfare activities
CN108829250A (en) A kind of object interaction display method based on augmented reality AR
US20230120293A1 (en) Method and apparatus for visualization of public welfare activities
CN108230434B (en) Image texture processing method and device, storage medium and electronic device
CN111798554A (en) Rendering parameter determination method, device, equipment and storage medium
CN111382223B (en) Electronic map display method, terminal and electronic equipment
CN111240563B (en) Information display method, device, equipment and storage medium
CN112099781A (en) Map visualization method and device, storage medium and equipment
US20240126568A1 (en) Method, apparatus, device, computer readable storage medium and product for pattern rendering
CN115546349B (en) Method for realizing proportion and position switching of map background map based on Openlayer
CN109766530B (en) Method and device for generating chart frame, storage medium and electronic equipment
CN111538726A (en) Information input method based on imaging and related equipment
CN111506280B (en) Graphical user interface for indicating off-screen points of interest
CN109522429A (en) Method and apparatus for generating information
CN109934734A (en) A kind of tourist attractions experiential method and system based on augmented reality
CN113935891B (en) Pixel-style scene rendering method, device and storage medium
CN114896525A (en) Information display method and device and electronic equipment
CN109559382A (en) Intelligent guide method, apparatus, terminal and medium
JP2020537741A (en) Dynamic styling of digital maps
CN110990501B (en) Three-dimensional road modeling method, device, electronic equipment and storage medium
CN115131531A (en) Virtual object display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40024778; Country of ref document: HK)
GR01 Patent grant