CN112446823B - Monitoring image display method and device

Monitoring image display method and device

Info

Publication number
CN112446823B
Authority
CN
China
Prior art keywords
image
coordinate
unfolding
dimensional
coordinate point
Prior art date
Legal status
Active
Application number
CN202110134312.1A
Other languages
Chinese (zh)
Other versions
CN112446823A (en)
Inventor
唐志斌
张凯
朱陈涛
Current Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Original Assignee
Wuhan Zhongke Tongda High New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Zhongke Tongda High New Technology Co Ltd filed Critical Wuhan Zhongke Tongda High New Technology Co Ltd
Priority to CN202110134312.1A
Publication of CN112446823A
Application granted
Publication of CN112446823B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T3/073
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0485: Scrolling or panning
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The application provides a monitoring image display method and device for an intelligent traffic system. The intelligent traffic system comprises a front-end camera and a monitoring terminal with a screen, the monitoring terminal being connected to the front-end camera wirelessly. The monitoring terminal acquires and displays a three-dimensional columnar image to be processed and, in response to an image expansion trigger event of a user for the three-dimensional columnar image, performs plane expansion processing on the three-dimensional columnar image based on the coordinate point distance between a three-dimensional columnar coordinate point and an expansion plane coordinate point to obtain a plane rectangular image. The plane rectangular image is then scroll-rendered according to its size information, and the rendered plane rectangular image is acquired and displayed. Compared with the two-dimensional image display mode used in the traditional video monitoring field, the image display effect is better.

Description

Monitoring image display method and device
Technical Field
The application relates to the field of intelligent traffic, in particular to a monitoring image display method and device.
Background
With the rapid development of the security industry, and in particular the rapid advance of construction projects such as safe cities and intelligent traffic, demand for high-definition video monitoring platforms in fields such as finance and transportation keeps increasing, and video monitoring systems are ever more widely applied in the market.
In the field of traditional video monitoring, a panoramic monitoring image is usually displayed by three-dimensional perspective projection, but this manner can only show a local portion of the global scene image and cannot meet the monitoring requirement of a user who needs to grasp the whole environment from a macroscopic perspective.
Therefore, existing monitoring image display methods suffer from a poor image display effect.
Disclosure of Invention
The embodiment of the application provides a monitoring image display method and device for an intelligent traffic system, which are used for solving the technical problem of poor image display effect in the current monitoring image display technology.
In a first aspect, an embodiment of the present application provides a monitoring image displaying method for an intelligent traffic system, where the intelligent traffic system includes a front-end camera and a monitoring terminal having a screen, and the monitoring image displaying method includes:
acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in an equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is located on the same circular surface as the three-dimensional columnar coordinate point;
obtaining and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and displaying the rendered plane rectangular image.
In a second aspect, an embodiment of the present application further provides a monitoring image display device for an intelligent traffic system, where the intelligent traffic system includes a front-end camera and a monitoring terminal having a screen, and the monitoring image display device includes:
the image acquisition module is used for acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
the image unfolding module is used for responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is located on the same circular surface as the three-dimensional columnar coordinate point;
the image rendering module is used for acquiring and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and the image display module is used for displaying the rendered planar rectangular image.
In a third aspect, an embodiment of the present application further provides a computer device, including a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform any one of the operations in the monitoring image displaying method.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium for an intelligent transportation system, where the computer-readable storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in the monitoring image displaying method.
Advantageous effects: the embodiments of the application provide a monitoring image display method and device for an intelligent traffic system, where the intelligent traffic system comprises a front-end camera and a monitoring terminal with a screen. The monitoring terminal acquires and displays a three-dimensional columnar image to be processed, responds to an image expansion trigger event of a user for the three-dimensional columnar image, and performs plane expansion processing on the three-dimensional columnar image based on the coordinate point distance between a three-dimensional columnar coordinate point and an expansion plane coordinate point to obtain a plane rectangular image. The plane rectangular image is then scroll-rendered according to its size information to obtain a rendered plane rectangular image, which is displayed. With this method, the user can view the video monitoring scene picture in an all-round manner without frequently reducing and enlarging the monitored image, thereby effectively improving the display effect of the video monitoring image.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of a monitoring image displaying method for an intelligent transportation system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a monitoring image displaying method for an intelligent transportation system according to an embodiment of the present disclosure.
Fig. 3 is an expanded schematic view of a three-dimensional columnar image according to an embodiment of the present application.
Fig. 4 is another expanded schematic view of a three-dimensional columnar image provided in an embodiment of the present application.
Fig. 5 is an interface schematic diagram for determining an image expansion position according to an embodiment of the present application.
Fig. 6 is an interface schematic diagram of a planar rectangular image in a monitoring terminal according to an embodiment of the present application.
Fig. 7 is a schematic view of scroll rendering of a planar rectangular image according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a monitoring image display device according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the application, the smart transportation refers to a transportation-oriented service system which fully utilizes modern electronic information technologies such as internet of things, cloud computing, artificial intelligence, automatic control, mobile internet and the like in the transportation field; the intelligent traffic system is a 'high-efficiency, safe, environment-friendly, comfortable and civilized' intelligent traffic and transportation system established by taking a national intelligent traffic system framework as guidance, greatly improves the management level and the operation efficiency of the urban traffic and transportation system, and provides all-round traffic information service and convenient, efficient, quick, economic, safe, humanized and intelligent traffic and transportation service for travelers.
In the embodiments of the present application, the word "for example" is used to mean "serving as an example, instance, or illustration." Any embodiment described herein as "for example" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the invention. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and processes are not shown in detail to avoid obscuring the description of the invention with unnecessary detail. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
It should be noted that, since the method in the embodiments of the present application is executed in monitoring terminals, the objects each monitoring terminal processes exist in the form of data or information; time, for example, is in essence time information. It can therefore be understood that, when size, number, position and the like are mentioned in the subsequent embodiments, corresponding data exist for the monitoring terminals to process; this is not described in detail again herein.
The embodiment of the application provides a monitoring image display method and a monitoring image display device for an intelligent traffic system, which are respectively described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of a scene of a monitoring image displaying method for an intelligent traffic system according to an embodiment of the present disclosure. The system may include a front-end camera 11 and a monitoring terminal 12, and the front-end camera 11 and the monitoring terminal 12 may be connected and communicate through an internet formed by various gateways, such as a wide area network and a local area network, which is not described again here. It is understood that the front-end camera 11 includes, but is not limited to, the camera of a front-end device such as an embedded high-definition video camera, an industrial personal computer or a high-definition camera, and is used for information acquisition, encoding, processing, storage, transmission, security control and the like. The monitoring terminal 12 is a client device registered and authorized by the intelligent transportation system that needs to operate on the data and devices in the system, and may specifically include a client for a traffic police officer, a client for a developer, and the like. The monitoring terminal 12 may be a device containing receiving and transmitting hardware, that is, a device capable of bidirectional communication over a bidirectional communication link. Such devices include cellular or other communication devices, which may have a single-line display, a multi-line display or no multi-line display, and may specifically be a desktop terminal or a mobile terminal such as a mobile phone, a tablet computer or a notebook computer.
It should be further noted that the scenario diagram of the intelligent traffic system shown in fig. 1 is only an example, and the intelligent traffic system and the scenario described in the embodiment of the present invention are for more clearly illustrating the technical solution of the embodiment of the present invention, and do not form a limitation on the technical solution provided in the embodiment of the present invention. The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Referring to fig. 2, in one embodiment, a monitoring image displaying method for an intelligent traffic system is provided. The embodiment is mainly illustrated by applying the method to the monitoring terminal 12 in fig. 1. As shown in fig. 2, the monitoring image displaying method specifically includes steps S201 to S204, as follows:
s201, acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a target monitoring image after bending processing, and the target monitoring image is an image acquired by the front-end camera.
The target monitoring image may be an image collected by a front-end camera 11 preset at any position, and may be an image obtained by photographing or a frame extracted from a monitoring video obtained by video recording; the position may be any location where the surrounding scene needs to be monitored, such as a train station, a highway or a street. It can be understood that the front-end camera 11 can acquire a plurality of images, but only one image is displayed in three-dimensional columnar form, and that image is the target monitoring image. The target monitoring image can be selected and determined in several ways by a worker who needs to display a monitoring image, for example: (1) the monitoring terminal 12 displays thumbnails of a plurality of images through an interactive interface, and the worker selects and submits any one of them, the selected image being the target monitoring image; (2) the monitoring terminal 12 performs image recognition on the images or video sent by the front-end camera 11 through a preset image recognition algorithm, and if a certain frame is recognized as containing a preset target object or behavior, for example prohibited articles or dangerous behavior, that frame can be determined as the target monitoring image; (3) the monitoring terminal 12 extracts and displays the images or video sent by the front-end camera 11 at regular intervals based on an image timing display rule preset by the worker, and the image extracted at the target time point is the target monitoring image; (4) the interactive interface of the monitoring terminal 12 displays a plurality of virtual buttons, each corresponding to one front-end camera 11; the worker triggers the virtual button of one front-end camera 11, thereby submitting the image acquired by that camera to the monitoring terminal 12 as the target monitoring image, and if what the camera acquires is a video, the monitoring terminal 12 can determine the target monitoring image by random frame extraction, frame extraction with recognition, or timed frame extraction from the video. It should be noted that, although this embodiment only illustrates four ways of obtaining the target monitoring image, it is not excluded that in other embodiments these four ways may be superimposed, substituted for one another, and so on. The embodiment of the present application does not limit the specific way of acquiring the target monitoring image, but one of the acquiring ways will be described in detail below.
Specifically, after obtaining the target monitoring image, the monitoring terminal 12 may bend the target monitoring image to fit a preset three-dimensional columnar model, so that the processed target monitoring image takes the shape of a hollow cylinder, that is, a cylinder without upper and lower top surfaces. It can be understood that, in the processed three-dimensional columnar image, the outer side surface of the cylinder displays the picture of the target monitoring image, while the inner side surface may display a plain surface in any preset ground color, such as gray, black or white. After the monitoring terminal 12 acquires the three-dimensional columnar image and displays it to the worker through the interactive interface, it can wait for the worker's trigger instruction, so as to continue processing the three-dimensional columnar image according to the corresponding trigger instruction and realize the subsequent all-round display of the target monitoring image.
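The patent does not give an explicit bending formula, so the following is only a minimal sketch of one plausible cylindrical mapping, assuming that image columns are mapped to the angle around the cylinder and image rows to the position along the cylinder axis, which yields the side surface of a hollow cylinder without top or bottom faces; the radius, the height scale and the (x, z, y) axis ordering are illustrative assumptions chosen to echo the coordinate convention of the worked example discussed with fig. 4 below.

    # Hedged sketch (not the patent's formula): map a W x H monitoring image onto
    # the outer side surface of a hollow cylinder of radius R.
    import numpy as np

    def wrap_to_cylinder(width, height, radius):
        """Return an (H, W, 3) array of assumed 3D positions, one per pixel."""
        u = np.arange(width)                    # pixel columns
        v = np.arange(height)                   # pixel rows
        theta = 2.0 * np.pi * u / width         # column -> angle around the cylinder
        x = radius * np.sin(theta)              # circle lies in the X-Z plane
        z = radius * np.cos(theta)
        y = v * (2.0 * radius) / height         # row -> position along the cylinder axis
        xs, ys = np.meshgrid(x, y)              # both of shape (H, W)
        zs, _ = np.meshgrid(z, y)
        return np.stack([xs, zs, ys], axis=-1)  # (x, z, y) ordering, as in fig. 4

    positions = wrap_to_cylinder(width=1920, height=1080, radius=100.0)
    print(positions.shape)                      # (1080, 1920, 3)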
It should be noted that, the user/worker referred to in the embodiment of the present application may be a developer of the monitoring terminal 12, or may be an insider of the public security system.
In one embodiment, this step includes: acquiring at least one monitoring image through the front-end camera to obtain a thumbnail image corresponding to the at least one monitoring image; displaying a thumbnail image corresponding to the at least one monitoring image; determining a target monitoring image in the at least one monitoring image in response to a user selected trigger event for the thumbnail image; based on a preset three-dimensional columnar display model, bending the target monitoring image to obtain a three-dimensional columnar image; and displaying the three-dimensional columnar image.
The thumbnail image is an image obtained by scaling down the monitoring image in a certain proportion; its image content is consistent with that of the monitoring image, while its size is not. The monitoring image refers to an image collected by the front-end camera 11.
The selection trigger event refers to an image selection instruction submitted by a user through the monitoring terminal 12, a mouse or a touch pen at a certain moment in the terminal interface, and the operation may be a click operation, a double click operation or a long press operation. For example, one or more images are displayed on the interactive interface of the monitoring terminal 12, and when a user clicks a certain image, an image selection instruction for the image is submitted to the monitoring terminal 12, and this operation process is a selected trigger event for the image.
The three-dimensional columnar display model is a virtual model presenting a three-dimensional cylindrical shape, and can be used for displaying a three-dimensional columnar image.
Specifically, the monitoring terminal 12 may obtain one or more monitoring images through the pre-connected front-end camera 11 and scale down the obtained monitoring images to obtain the corresponding thumbnail images. The purpose of obtaining the thumbnail images corresponding to the monitoring images is to let the interactive interface of the monitoring terminal 12 display as many monitoring images as possible and to spare the user frequent operations when browsing each monitoring image.
For example, the interactive interface of the monitoring terminal 12 may only be able to display one full-size monitoring image, but can simultaneously display thumbnails of six monitoring images.
More specifically, after the monitoring terminal 12 acquires and displays the thumbnail images corresponding to the monitoring images, it can detect and respond to a trigger event in which the user selects a certain thumbnail image, determine the target monitoring image selected and submitted by the user, and then perform bending processing on the target monitoring image using the three-dimensional columnar display model to obtain the three-dimensional columnar image. It can be understood that, although this embodiment does not describe in detail how the target monitoring image is bent into the three-dimensional columnar image, how the three-dimensional columnar image is unfolded into the corresponding planar rectangular image will be described in detail below. That is, the process from the target monitoring image to the three-dimensional columnar image is in fact the reverse of the process from the three-dimensional columnar image to the planar rectangular image: the processing steps are the same, only the execution order is reversed. The application proposes converting the target monitoring image into a three-dimensional columnar image and then converting the three-dimensional columnar image into a planar rectangular image, with the aim of providing the user with intermediate image display effects, helping the user view the monitoring image more conveniently and more comprehensively, and alleviating the problem that the single display mode of existing image display methods leads to a poor monitoring image display effect.
S202, responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting a coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in an equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; and the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is on the same circular surface as the three-dimensional columnar coordinate point.
The image unfolding position refers to a position where the image is split, that is, the image splitting line shown in fig. 3 described above.
The three-dimensional cylindrical coordinate point and the expansion plane coordinate point are coordinate points determined based on a preset image coordinate system. For example, a three-dimensional cylindrical coordinate point refers to a coordinate point in an image coordinate system at a specified pixel location in a three-dimensional cylindrical image; for example, the developed planar coordinate point is a coordinate point in the image coordinate system based on a pixel position in the planar rectangular image corresponding to a specified pixel position in the three-dimensional columnar image after the three-dimensional columnar image is developed into the planar rectangular image.
The coordinate point distance refers to the distance value between any two coordinate positions. For example, if the (x, y, z) coordinates of the three-dimensional columnar coordinate point in the preset image coordinate system are (0, 2, 5) and the (x, y, z) coordinates of the unfolding plane coordinate point after the image is unfolded are (-6, -2, 5), the corresponding coordinate point distance is approximately 7.2.
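A quick numerical check of the example above, assuming the distance meant here is the ordinary Euclidean distance between the two coordinate points:

    # Euclidean distance between the two example coordinate points above.
    import math

    p = (0.0, 2.0, 5.0)     # three-dimensional columnar coordinate point
    q = (-6.0, -2.0, 5.0)   # unfolding plane coordinate point
    print(round(math.dist(p, q), 2))  # 7.21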
Specifically, after the monitoring terminal 12 acquires and displays the three-dimensional cylindrical image corresponding to the target monitoring image, it may receive an image expansion triggering instruction of the user for the three-dimensional cylindrical image through its interactive interface, and in response to the instruction, determine a coordinate point of the three-dimensional cylindrical image in a preset image coordinate system, that is, a three-dimensional cylindrical coordinate point, and perform plane expansion processing on the three-dimensional cylindrical image based on a coordinate point distance between the three-dimensional cylindrical coordinate point and a preset expansion plane coordinate point, so as to acquire a plane rectangular image obtained after the three-dimensional cylindrical image is expanded. It can be understood that the unfolding plane coordinate point referred to in this embodiment is a preset unfolding plane coordinate point, and the coordinate point may be a preset fixed coordinate point, or a relative coordinate point estimated after the unfolding according to the image dimensions such as the length, the width, and the like of the three-dimensional columnar image. The image expansion step in the present embodiment will be described in detail below, and reference may be made to fig. 3 for an expansion flow of the three-dimensional columnar image.
In one embodiment, this step includes: responding to an image expansion triggering event of a user aiming at the three-dimensional columnar image, and determining the coordinates of a target point to be split in the three-dimensional columnar image; the target point coordinate is one of the coordinates of the end points of the two lines on the split line corresponding to the image unfolding position; determining a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the coordinates of the target point, wherein the unfolding plane coordinate point is a preset unfolding plane coordinate point; determining the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times; the image expansion times are greater than or equal to two; and carrying out plane expansion processing on the three-dimensional columnar image based on the expansion position coordinates to obtain a plane rectangular image.
The coordinates of the target point are coordinates of a position to be split in the three-dimensional columnar image, for example, one of line end coordinates of a split line of the image shown in fig. 3.
Specifically, after detecting the user's image expansion triggering instruction for the three-dimensional columnar image, the monitoring terminal 12 may first determine the coordinates of the target point in the three-dimensional columnar image. The determination can be configured according to actual application requirements, for example: (1) determined according to the image splitting position the user selects by triggering on the three-dimensional columnar image; (2) determined according to a default image splitting position, such as a centered position in the image; (3) determined according to an image content recognition result, for example a position between two target objects (such as two persons) or a background image portion containing no real object. It should be understood that, although this embodiment illustrates three ways of determining the coordinates of the target point, it is not excluded that these ways may be combined, superimposed or split in other embodiments; the specific way of determining the coordinates of the target point can be chosen according to actual application requirements.
More specifically, after the monitor terminal 12 determines the coordinates of the target point in the three-dimensional cylindrical image, the coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point may be further analytically determined. The expansion plane coordinate point described in the foregoing embodiment may be a preset expansion plane coordinate point, and the coordinate point may be a preset fixed coordinate point, or a relative coordinate point after expansion may be estimated according to the image dimensions such as the length, the width, and the like of the three-dimensional columnar image. Therefore, the monitoring terminal 12 may obtain a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point after determining the unfolding plane coordinate point corresponding to the target point coordinate, that is, the three-dimensional cylindrical coordinate point at this time is the target point coordinate.
Further, after the monitoring terminal 12 determines the coordinate point distance between the target point coordinates and the unfolding plane coordinate point, the expansion position coordinates of the three-dimensional columnar image, which lie on the straight line corresponding to that coordinate point distance during the expansion process, can be determined based on the preset image expansion times; the expansion position coordinates are the coordinate points with which the target point coordinates are to coincide during image expansion. Specifically, referring to fig. 4, a schematic diagram of the change from the target point coordinate "A" to the unfolding plane coordinate point in one embodiment is shown. It should be noted that the circular area shown in fig. 4 (a) is one of the circular top surfaces of the three-dimensional columnar image, i.e., the top surface of the three-dimensional columnar image shown on the left side in fig. 3. Further, although the present embodiment has only briefly explained three ways of determining the target point coordinates, the specific target point coordinate determining step, coordinate point distance determining step and expansion position coordinate determining step will be explained in detail below.
In one embodiment, the step of determining coordinates of a target point to be split in the three-dimensional columnar image in response to a user image expansion trigger event for the three-dimensional columnar image includes: responding to an image expansion triggering event of the three-dimensional columnar image by the user, and acquiring an image expansion position determined by the user; if the image unfolding position is a default position, determining the coordinates of the line end points on a preset splitting line in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image; and if the image unfolding position is a calibration position, determining the coordinates of the line end points on a calibration splitting line determined by a user in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image.
The image unfolding position refers to a position where the image is split, that is, the image splitting line shown in fig. 3 described above.
Specifically, the monitoring terminal 12 may have an interactive interface through which the three-dimensional columnar image is displayed, and after the user refers to the image and submits the image expansion triggering instruction for the three-dimensional columnar image through the interactive interface, the monitoring terminal 12 may respond to the image expansion triggering event corresponding to the image expansion triggering instruction, and further obtain the image expansion position determined and submitted by the user through the interactive interface.
Further, for the acquisition mode of the image expansion position, at least two virtual buttons are displayed on the interactive interface, and each button corresponds to one optional position, including a "default position" and a "calibration position". The user triggers a certain button, that is, determines and submits the image expansion position corresponding to the button to the monitoring terminal 12, and the button triggering mode may be click triggering, double-click triggering, long-press triggering, and the like.
More specifically, if the virtual button triggered by the user is the "default position", the monitoring terminal 12 may determine that a line end point coordinate on the preset splitting line in the three-dimensional columnar image is the target point coordinate; if the virtual button triggered by the user is the "calibration position", the monitoring terminal 12 may determine that a line end point coordinate on the calibration splitting line determined by the user in the three-dimensional columnar image is the target point coordinate. Fig. 5 is a schematic interface diagram of the image expansion position determination step described in this embodiment. It should be noted that, when the virtual button triggered by the user is the "calibration position", the monitoring terminal 12 provides a selection cursor displayed through the interactive interface; the cursor takes the form of a split selection line over the three-dimensional columnar image, shown as the cursor line in fig. 5, and the four virtual buttons "up", "down", "left" and "right" in fig. 5 allow the user to adjust the position of the cursor line.
In one embodiment, the step of determining a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the target point coordinates comprises: determining the coordinates of the target point as the three-dimensional columnar coordinate point; determining a cylindrical circular plane where the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point are located; and calculating the distance value between the three-dimensional columnar coordinate point and the coordinate point of the expansion plane based on the columnar circular plane to obtain the coordinate point distance.
Specifically, before the monitoring terminal 12 determines the coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point, the target point coordinates may be regarded as the three-dimensional cylindrical coordinate point, as described in the above embodiment, and the cylindrical circular plane where the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point are located is first determined as the target plane. It will be appreciated that, since the preset image coordinate system is a three-dimensional coordinate system, the three-dimensional cylindrical image may have three surfaces in the coordinate system, namely two circular planes formed by the edges of the image (a circular top plane and a circular bottom plane) and the cylindrical side surface. The three-dimensional cylindrical coordinate point and the unfolding plane coordinate point may therefore lie on more than one surface at the same time; in order to determine the coordinate point distance between the two coordinate points, it is necessary to first determine the circular plane in which both coordinate points are located as the cylindrical circular plane, and the coordinate point distance determined based on this cylindrical circular plane is the coordinate point distance required in this embodiment.
In one embodiment, the step of determining the coordinates of the expansion position of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times includes: based on preset image expansion times, carrying out equal-proportion splitting processing on the coordinate point distance to obtain at least one split point coordinate; determining the coordinates of the at least one splitting point as first unfolding position coordinates of the target point; acquiring a second unfolding position coordinate of each intermediate point coordinate based on the first unfolding position coordinate of the target point coordinate; the second unfolding position coordinate is a splitting point coordinate corresponding to the intermediate point coordinate; and determining the first unfolding position coordinate and the second unfolding position coordinate as the unfolding position coordinate of the three-dimensional columnar image.
The intermediate point coordinates may be point coordinates located on a circular top surface of the three-dimensional cylindrical image, arranged equidistantly on the boundary of that top surface and lying on the same circular top surface as the target point coordinates. As shown in (b) of fig. 4, only the intermediate point coordinates on the left half of the top surface of the three-dimensional cylindrical image are shown; the intermediate point coordinates on the right half are handled in the same manner as those on the left half, so the right half of the figure has been omitted.
Specifically, after the monitoring terminal 12 determines the coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the target point coordinates, the coordinate point distance may be split in equal proportion based on the preset image unfolding times to obtain at least one splitting point coordinate.
For example, if the current coordinate point distance is "5" and the preset number of image expansion times is "2", the number of split point coordinates is "1"; if the preset number of image expansion times is "3", the number of split point coordinates is "2", and so on. It is understood that, although the number of image expansion times could be set to "1" to improve image expansion efficiency, the image expansion quality would suffer, resulting in a poor image expansion effect. The application therefore proposes that the preset number of image expansion times be greater than or equal to "2", so that at least one split point coordinate is obtained. Although the simulation approximation degree of this image expansion method is only about 70%, the difference is almost negligible for the user because the image expansion process is very short.
More specifically, after the monitoring terminal 12 obtains the coordinates of at least one splitting point, the coordinates of the at least one splitting point can be used as the coordinates of a first unfolding position corresponding to the coordinates of the target point, where the coordinates of the first unfolding position are the coordinates of a node of a movement path of the coordinates of the target point, that is, the coordinates of the target point need to be overlapped with the coordinates of the first unfolding position in the process of unfolding the image.
Further, after the monitoring terminal 12 determines the first unfolding position coordinate, the split point coordinate corresponding to each intermediate point coordinate may be obtained based on the same proportion as the second unfolding position coordinate, and finally the unfolding position coordinate of the three-dimensional columnar image is obtained.
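As a minimal illustration of the equal-proportion split described above (the function name and the plain linear split are assumptions for illustration, not code from the patent), an expansion count of n yields n - 1 split point coordinates between the start value and the end value:

    # Expansion count n divides the start-to-end span into n equal steps,
    # giving n - 1 split points (count 2 -> 1 split point, count 3 -> 2).
    def split_points(start, end, expansion_times):
        if expansion_times < 2:
            raise ValueError("the embodiment assumes at least two expansion times")
        step = (end - start) / expansion_times
        return [start + step * i for i in range(1, expansion_times)]

    print(split_points(0.0, 5.0, 2))  # [2.5]
    print(split_points(0.0, 5.0, 3))  # [1.666..., 3.333...]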
In one embodiment, the step of performing plane expansion processing on the three-dimensional columnar image based on the coordinates of the expansion position to obtain a planar rectangular image includes: determining the intermediate point coordinates corresponding to the target point coordinates to be split in the three-dimensional columnar image; moving the target coordinate point based on the first unfolding position coordinates and sequentially moving the intermediate point coordinates based on the second unfolding position coordinates, so as to perform plane unfolding processing on the three-dimensional columnar image, until the target coordinate point coincides with the unfolding plane coordinate point and the straight line formed by connecting the target coordinate point with the intermediate point coordinates is perpendicular to a preset coordinate axis; and determining the three-dimensional columnar image after the plane unfolding processing as the planar rectangular image.
Specifically, the present embodiment will use one of the circular top surfaces of the three-dimensional cylindrical image as the viewing angle and describe in detail how the target point coordinates and the intermediate point coordinates on that top surface move, so that the three-dimensional cylindrical image can be expanded into a planar rectangular image, as shown in fig. 4. Diagram (b) in fig. 4 shows four intermediate point coordinates "a, b, c, d" and a target point coordinate "A" to be split; the target point coordinate "A" has two first expansion position coordinates, "A'" and "A''", while the second expansion position coordinates of the intermediate point coordinates "a, b, c, d" are not shown in fig. 4 but can be set by reference to the first expansion position coordinates.
For example, in the image coordinate system shown in fig. 4, the horizontal axis is the positive X axis pointing rightward and the vertical axis is the positive Z axis pointing upward; the radius of the circular top surface of the three-dimensional columnar image is "R" and the perimeter of the circular top surface is "2 × PI × R". The initial coordinate value of the target point coordinate "A" is (0, R, y), the coordinate value of the unfolding plane coordinate point is (-PI × R, -R, y), and the initial coordinate values (X, Z, y) of the four intermediate point coordinates "a, b, c, d" have X values ranging in equal proportion from 0 to (-R), Z values ranging in equal proportion from R to (-R), and identical y values. If the three-dimensional columnar image is expanded 3 times and the coordinate point distance between (0, R, y) and (-PI × R, -R, y) is split in equal proportion, the coordinate values of the first expansion position coordinates "A'" and "A''" are ((-PI × R)/3, R - 2R/3, y) and ((-PI × R)/3 × 2, -R + 2R/3, y), respectively. The second expansion position coordinates change in the same equal proportion based on the first expansion position coordinates, that is, the correspondence between the second expansion position coordinates and the first expansion position coordinates is the same as the correspondence between the target point coordinates and the unfolding plane coordinate point. The monitoring terminal 12 therefore only needs to move the target coordinate point sequentially through the first expansion position coordinates and move each intermediate point coordinate sequentially through the second expansion position coordinates for the three-dimensional columnar image to be unfolded into a planar state, obtaining a planar rectangular image. The final planar rectangular image shows a state in which the target coordinate point "A" coincides with the unfolding plane coordinate point and the straight line formed by connecting the target coordinate point "A" with the intermediate point coordinates "a, b, c and d" is perpendicular to the Z axis.
It should be noted that, since this embodiment only shows the expansion of one circular top surface, its Y-axis coordinate is fixed and unchanged; every coordinate along the Y axis can be moved correspondingly in the manner described above, so the specific Y-axis coordinate value is not specified in this embodiment.
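The worked example above can be reproduced with a few lines of per-axis equal-proportion interpolation; this is a sketch under the assumption that each coordinate point moves along the straight line between its initial position and its unfolded position, and the function name and the concrete values R = 3, y = 0 are merely illustrative:

    # Reproduces the example: target point A = (0, R, y) reaches the unfolding
    # plane coordinate point (-PI*R, -R, y) in 3 equal steps, passing through
    # the first expansion position coordinates A' and A''.
    import math

    def expansion_positions(start, end, expansion_times):
        """Intermediate (x, z, y) positions a coordinate point passes through."""
        positions = []
        for i in range(1, expansion_times):
            t = i / expansion_times
            positions.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
        return positions

    R, y = 3.0, 0.0                     # illustrative radius and fixed Y value
    A = (0.0, R, y)                     # target point on the circular top surface
    A_unfolded = (-math.pi * R, -R, y)  # preset unfolding plane coordinate point

    for point in expansion_positions(A, A_unfolded, expansion_times=3):
        print(tuple(round(c, 3) for c in point))
    # (-3.142, 1.0, 0.0)   i.e. ((-PI*R)/3,     R - 2R/3, y)
    # (-6.283, -1.0, 0.0)  i.e. ((-PI*R)/3 * 2, -R + 2R/3, y)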
S203, obtaining and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image.
The size information of the planar rectangular image is length and width size information of the planar rectangular image.
Specifically, the monitoring terminal 12 performs plane expansion processing on the three-dimensional cylindrical image to obtain a planar rectangular image, and then further obtains size information of the planar rectangular image, and performs scroll rendering on the planar rectangular image according to the size information and the terminal interface size of the monitoring terminal 12 to obtain a rendered planar rectangular image for display. The image rendering steps involved in the present embodiment will be described in detail below.
In an embodiment, the step of obtaining and performing scroll rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image includes: acquiring size information of the planar rectangular image; if the size information is larger than the preset terminal interface size, performing size adjustment processing on the planar rectangular image to enable the first broadside size of the planar rectangular image to be equal to the second broadside size in the terminal interface size; and responding to the image display triggering operation of the user aiming at the planar rectangular image, and performing rolling rendering on the planar rectangular image after the size adjustment processing to obtain the rendered planar rectangular image.
Wherein, rendering in computer graphics refers to the process of generating images from models by software.
Specifically, the step of performing scroll rendering on the planar rectangular image by the monitoring terminal 12 to obtain a rendered planar rectangular image is as follows: the monitoring terminal 12 obtains the length and width dimension information of the planar rectangular image; if either of the obtained length and width dimensions is larger than the preset terminal interface size, it performs size adjustment processing on the planar rectangular image so that the first broadside size of the planar rectangular image equals the second broadside size in the terminal interface size, that is, so that at least the long side or the wide side of the image can be displayed completely by the monitoring terminal 12; the monitoring terminal 12 can then respond to an image display triggering operation of the user for the planar rectangular image and perform scroll rendering on the size-adjusted planar rectangular image to obtain the rendered planar rectangular image.
It should be noted that, in the embodiment of the present application, a UV scrolling rendering manner is provided to perform scroll rendering on the planar rectangular image, as shown in fig. 6. Generally speaking, the size of the terminal interface of the monitoring terminal 12 is fixed; when a high-definition picture, such as a 2K or 4K picture, is unfolded, the whole picture cannot be viewed completely on one interface. In this case, the whole picture can be displayed by means of the UV scrolling rendering manner, which is described in detail below.
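The patent names UV scrolling rendering but gives no implementation details; the sketch below models one way such scrolling could work, treating the flat image as a texture and the terminal interface as a fixed-size window whose horizontal UV offset selects the visible region (the class name UVScroller and its attributes are assumptions, not taken from the patent):

    # Sketch of UV scrolling: the visible region of the texture is selected by a
    # horizontal UV offset in [0, 1 - visible_fraction]; scrolling shifts the offset.
    class UVScroller:
        def __init__(self, image_width, viewport_width):
            self.image_width = image_width
            self.viewport_width = viewport_width
            self.visible_fraction = min(1.0, viewport_width / image_width)
            self.u_offset = 0.0  # left edge of the visible region in UV space

        def scroll(self, delta_u):
            """Shift the visible window, clamped so it stays inside the texture."""
            max_offset = 1.0 - self.visible_fraction
            self.u_offset = min(max(self.u_offset + delta_u, 0.0), max_offset)

        def visible_pixel_range(self):
            """Pixel columns of the flat image currently mapped onto the interface."""
            left = int(self.u_offset * self.image_width)
            return left, left + self.viewport_width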
And S204, displaying the rendered plane rectangular image.
In one embodiment, the step of presenting the rendered planar rectangular image comprises: and displaying a target image area in the rendered planar rectangular image in response to an image movement trigger event of the user for the rendered planar rectangular image.
The target image area may refer to a local image area in the planar rectangular image.
Specifically, as described in the foregoing embodiment of the UV scrolling rendering manner and as shown in fig. 7, the terminal interface size of the monitoring terminal 12 is smaller than the length and width of the planar rectangular image (the long-side size of the terminal interface is smaller than the long-side size of the planar rectangular image), so the monitoring terminal 12 can currently display only 70% of the planar rectangular image through its interface and the remaining 30% cannot be shown at the same time. When the monitoring terminal 12 detects the user's image movement trigger operation, it can directly change the UV coordinates so as to display the remaining 30% of the planar rectangular image.
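Continuing the UV-scrolling idea with the 70%/30% situation described above (the numbers are illustrative): shifting the horizontal UV offset from 0.0 to 0.3 in response to the image movement trigger event moves the sampled window so that the remaining 30% of the planar rectangular image becomes visible.

    # With 70% of the image width visible, a UV offset change of 0.3 reveals the rest.
    image_width = 1920
    viewport_width = int(image_width * 0.7)   # 1344 pixels, i.e. 70% of the image

    for u_offset in (0.0, 0.3):               # before / after the image movement event
        left = int(u_offset * image_width)
        print((left, left + viewport_width))
    # (0, 1344)   -> the first 70% of the planar rectangular image
    # (576, 1920) -> after the move, the remaining 30% is on screen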
According to the monitoring image display method above, the monitoring terminal acquires and displays the three-dimensional columnar image to be processed, responds to an image expansion trigger event of a user for the three-dimensional columnar image, performs plane expansion processing on the three-dimensional columnar image based on the coordinate point distance between the three-dimensional columnar coordinate point and the expansion plane coordinate point to obtain a plane rectangular image, then acquires the size information of the plane rectangular image and scroll-renders the plane rectangular image according to that size information to obtain the rendered plane rectangular image, and finally displays the rendered plane rectangular image. With this method, the user can view the video monitoring scene picture in an all-round manner without frequently reducing and enlarging the monitored image, thereby effectively improving the display effect of the video monitoring image.
On the basis of the method in the foregoing embodiment, the present embodiment will be further described from the perspective of a monitoring image display apparatus, please refer to fig. 8, where fig. 8 specifically describes the monitoring image display apparatus provided in the embodiment of the present application, which may include:
the image acquisition module 810 is configured to acquire and display a three-dimensional columnar image to be processed, where the three-dimensional columnar image is a target monitoring image after bending processing, and the target monitoring image is an image acquired by the front-end camera;
an image unfolding module 820, configured to respond to an image unfolding trigger event of a user for the three-dimensional cylindrical image, obtain an image unfolding position determined by the user, obtain a three-dimensional cylindrical coordinate point on the image unfolding position, split a coordinate point distance between the three-dimensional cylindrical coordinate point and a preset unfolding plane coordinate point in an equal proportion, obtain an unfolding position coordinate, and perform plane unfolding processing on the three-dimensional cylindrical image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is located on the same circular surface as the three-dimensional columnar coordinate point;
the image rendering module 830 is configured to obtain and perform rolling rendering on the planar rectangular image according to the size information of the planar rectangular image, so as to obtain a rendered planar rectangular image;
an image display module 840, configured to display the rendered planar rectangular image.
In an embodiment, the image obtaining module 810 is further configured to obtain at least one monitoring image through the front-end camera, and obtain a thumbnail image corresponding to the at least one monitoring image; displaying a thumbnail image corresponding to the at least one monitoring image; determining a target monitoring image in the at least one monitoring image in response to a user selected trigger event for the thumbnail image; based on a preset three-dimensional columnar display model, bending the target monitoring image to obtain a three-dimensional columnar image; and displaying the three-dimensional columnar image.
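As a rough illustration of the bending step, each pixel column of the flat target monitoring image can be mapped to an angle on the circular cross-section of the preset cylindrical display model. The sketch below assumes the image width wraps exactly once around the cylinder; the names and the mapping are illustrative assumptions rather than the patent's own formula.

```python
import math

# Hedged sketch: wrapping a flat image onto a cylinder (assumed mapping, not the patent's formula).
def bend_point_to_cylinder(x, y, image_width, radius=None):
    """Map the pixel/vertex at column x and row y of the flat image onto the cylinder surface."""
    if radius is None:
        radius = image_width / (2.0 * math.pi)  # radius chosen so the width wraps once around
    theta = 2.0 * math.pi * x / image_width     # column index -> angle on the circular face
    return (radius * math.cos(theta),           # position on the circular cross-section
            radius * math.sin(theta),
            float(y))                           # image row -> height along the cylinder axis
```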
In one embodiment, the image unfolding module 820 is further configured to determine the coordinates of the target point to be split in the three-dimensional columnar image in response to an image unfolding trigger event of a user for the three-dimensional columnar image, where the target point coordinate is one of the two endpoint coordinates on the splitting line corresponding to the image unfolding position; determine the coordinate point distance between the three-dimensional columnar coordinate point and the unfolding plane coordinate point according to the coordinates of the target point, where the unfolding plane coordinate point is a preset unfolding plane coordinate point; determine the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times, the image expansion times being greater than or equal to two; and carry out plane expansion processing on the three-dimensional columnar image based on the expansion position coordinates to obtain a planar rectangular image.
In one embodiment, the image unfolding module 820 is further configured to obtain an image unfolding position determined by a user in response to an image unfolding trigger event of the three-dimensional columnar image by the user; if the image unfolding position is a default position, determining the coordinates of the line end points on a preset splitting line in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image; and if the image unfolding position is a calibration position, determining the coordinates of the line end points on a calibration splitting line determined by a user in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image.
In one embodiment, the image unfolding module 820 is further configured to determine the coordinates of the target point as the three-dimensional columnar coordinate point; determine the cylindrical circular plane in which the three-dimensional columnar coordinate point and the unfolding plane coordinate point are located; and calculate the distance value between the three-dimensional columnar coordinate point and the unfolding plane coordinate point based on the cylindrical circular plane to obtain the coordinate point distance.
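For illustration, the distance on that shared circular plane could be taken as the arc length swept from the three-dimensional columnar coordinate point to the unfolding plane coordinate point, since the arc length is what the strip occupies once flattened. This is a hedged sketch under that assumption; the patent does not state whether the chord or the arc is meant, and all names are hypothetical.

```python
import math

# Hedged sketch: arc-length distance on the circular face (assumption; the patent may intend another metric).
def coordinate_point_distance(cyl_point, plane_point, center):
    """cyl_point, plane_point and center are 2D positions on the cylindrical circular plane."""
    radius = math.hypot(cyl_point[0] - center[0], cyl_point[1] - center[1])
    a1 = math.atan2(cyl_point[1] - center[1], cyl_point[0] - center[0])
    a2 = math.atan2(plane_point[1] - center[1], plane_point[0] - center[0])
    sweep = (a2 - a1) % (2.0 * math.pi)   # angle swept from the columnar point to the plane point
    return radius * sweep                  # arc length along the circular face
```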
In an embodiment, the three-dimensional columnar image includes a preset number of intermediate point coordinates, and the image unfolding module 820 is further configured to perform equal-proportion splitting processing on the coordinate point distance based on the preset image expansion times to obtain at least one split point coordinate; determine the at least one split point coordinate as the first unfolding position coordinate of the target point; acquire a second unfolding position coordinate of each intermediate point coordinate based on the first unfolding position coordinate of the target point coordinate, where the second unfolding position coordinate is the split point coordinate corresponding to the intermediate point coordinate; and determine the first unfolding position coordinate and the second unfolding position coordinate as the unfolding position coordinates of the three-dimensional columnar image.
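The equal-proportion splitting can be pictured as dividing the path from the columnar coordinate point to the unfolding plane coordinate point into as many intermediate targets as there are expansion steps. The sketch below uses straight-line interpolation purely for illustration; the names and the interpolation choice are assumptions, not the patent's prescribed method.

```python
# Hedged sketch: equal-proportion splitting of the coordinate point distance (assumed linear interpolation).
def split_positions(start_point, end_point, expansion_times):
    """Return one intermediate target position per expansion step (expansion_times >= 2).

    start_point -- the three-dimensional columnar coordinate point, e.g. (x, y, z)
    end_point   -- the preset unfolding plane coordinate point, same dimensionality
    """
    return [tuple(s + (e - s) * step / expansion_times
                  for s, e in zip(start_point, end_point))
            for step in range(1, expansion_times + 1)]
```

Applying the same splitting to each intermediate point coordinate would yield the second unfolding position coordinates, so every vertex has one target position per expansion step.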
In an embodiment, the unfolding position coordinates include a first unfolding position coordinate and a second unfolding position coordinate, and the image unfolding module 820 is further configured to determine the intermediate point coordinates corresponding to the coordinates of the target point to be split in the three-dimensional columnar image; move the target coordinate point based on the first unfolding position coordinate and sequentially move the intermediate point coordinates based on the second unfolding position coordinate to perform plane unfolding processing on the three-dimensional columnar image, until the target coordinate point coincides with the unfolding plane coordinate point and the straight line connecting the target coordinate point with the intermediate point coordinates is perpendicular to the preset coordinate axis; and determine the three-dimensional columnar image after the plane unfolding processing as the planar rectangular image.
In one embodiment, the image rendering module 830 is further configured to obtain the size information of the planar rectangular image; if the size information is larger than the preset terminal interface size, perform size adjustment processing on the planar rectangular image so that the first wide-side dimension of the planar rectangular image is equal to the second wide-side dimension of the terminal interface size; and in response to an image display trigger operation of the user for the planar rectangular image, perform rolling rendering on the size-adjusted planar rectangular image to obtain the rendered planar rectangular image.
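One way to picture the size adjustment is a uniform scale that makes the rectangle's wide side equal to the terminal's wide side, leaving any remaining excess of the long side to the scroll rendering sketched earlier. The conventions below (sizes as (long_side, wide_side) tuples) are assumptions for illustration only.

```python
# Hedged sketch: scaling the planar rectangular image to the terminal's wide side (assumed conventions).
def fit_to_terminal_width(image_size, terminal_size):
    """image_size and terminal_size are (long_side, wide_side) tuples in pixels."""
    image_long, image_wide = image_size
    _, terminal_wide = terminal_size
    scale = terminal_wide / image_wide                 # equalise the two wide-side dimensions
    return (image_long * scale, terminal_wide), scale  # scaled size plus the factor applied
```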
In one embodiment, the image display module 840 is further configured to display the target image area in the rendered planar rectangular image in response to an image movement trigger event of the user for the rendered planar rectangular image.
In the above embodiment, the monitoring terminal acquires and displays the three-dimensional columnar image to be processed, responds to an image unfolding trigger event of a user for the three-dimensional columnar image, performs plane unfolding processing on the three-dimensional columnar image based on the coordinate point distance between the three-dimensional columnar coordinate point and the unfolding plane coordinate point to obtain a planar rectangular image, then performs rolling rendering on the planar rectangular image according to its size information to obtain the rendered planar rectangular image, and displays the rendered planar rectangular image. In this way, the user can view the whole video monitoring scene without frequently zooming the monitored image out and in, which effectively improves the display effect of the video monitoring image.
In summary, for the specific limitations of the monitoring image display apparatus, reference may be made to the above limitations on the monitoring image display method, which are not described herein again. Each module in the monitoring image display apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The above modules may be embedded in or independent of the processor in the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, the present application further provides a computer device, which may be a monitoring terminal, and whose internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication may be implemented through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement a monitoring image display method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than those shown in the figure, or combine certain components, or have a different arrangement of components.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in an equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is positioned on a circular surface with the three-dimensional columnar coordinate point;
obtaining and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and displaying the rendered plane rectangular image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring at least one monitoring image through the front-end camera to obtain a thumbnail image corresponding to the at least one monitoring image;
displaying a thumbnail image corresponding to the at least one monitoring image;
determining a target monitoring image in the at least one monitoring image in response to a user selected trigger event for the thumbnail image;
based on a preset three-dimensional columnar display model, bending the target monitoring image to obtain a three-dimensional columnar image;
and displaying the three-dimensional columnar image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
responding to an image expansion triggering event of a user aiming at the three-dimensional columnar image, and determining the coordinates of a target point to be split in the three-dimensional columnar image; the target point coordinate is one of the two endpoint coordinates on the splitting line corresponding to the image unfolding position;
determining a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the coordinates of the target point, wherein the unfolding plane coordinate point is a preset unfolding plane coordinate point;
determining the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times; the image expansion times are greater than or equal to two;
and carrying out plane expansion processing on the three-dimensional columnar image based on the expansion position coordinates to obtain a plane rectangular image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
responding to an image expansion triggering event of the three-dimensional columnar image by the user, and acquiring an image expansion position determined by the user;
if the image unfolding position is a default position, determining the coordinates of the line end points on a preset splitting line in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image;
and if the image unfolding position is a calibration position, determining the coordinates of the line end points on a calibration splitting line determined by a user in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the coordinates of the target point as the three-dimensional columnar coordinate point;
determining a cylindrical circular plane where the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point are located;
and calculating the distance value between the three-dimensional columnar coordinate point and the coordinate point of the expansion plane based on the columnar circular plane to obtain the coordinate point distance.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
based on preset image expansion times, carrying out equal-proportion splitting processing on the coordinate point distance to obtain at least one split point coordinate;
determining the coordinates of the at least one splitting point as first unfolding position coordinates of the target point;
acquiring a second unfolding position coordinate of each intermediate point coordinate based on the first unfolding position coordinate of the target point coordinate; the second unfolding position coordinate is a splitting point coordinate corresponding to the intermediate point coordinate;
and determining the first unfolding position coordinate and the second unfolding position coordinate as the unfolding position coordinate of the three-dimensional columnar image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining a middle point coordinate corresponding to a target point coordinate to be split in the three-dimensional columnar image;
moving the target coordinate point based on the first unfolding position coordinate, and sequentially moving the intermediate point coordinate based on the second unfolding position coordinate to perform plane unfolding processing on the three-dimensional columnar image until the target coordinate point coincides with the unfolding plane coordinate point, wherein a straight line formed by connecting the target coordinate point with the intermediate point coordinate is perpendicular to a preset coordinate axis;
and determining the three-dimensional columnar image after the plane expansion processing as the plane rectangular image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring size information of the planar rectangular image;
if the size information is larger than the preset terminal interface size, performing size adjustment processing on the planar rectangular image to enable the first broadside size of the planar rectangular image to be equal to the second broadside size in the terminal interface size;
and responding to the image display triggering operation of the user aiming at the planar rectangular image, and performing rolling rendering on the planar rectangular image after the size adjustment processing to obtain the rendered planar rectangular image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and displaying a target image area in the rendered planar rectangular image in response to an image movement trigger event of the user for the rendered planar rectangular image.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in an equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is positioned on a circular surface with the three-dimensional columnar coordinate point;
obtaining and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and displaying the rendered plane rectangular image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring at least one monitoring image through the front-end camera to obtain a thumbnail image corresponding to the at least one monitoring image;
displaying a thumbnail image corresponding to the at least one monitoring image;
determining a target monitoring image in the at least one monitoring image in response to a user selected trigger event for the thumbnail image;
based on a preset three-dimensional columnar display model, bending the target monitoring image to obtain a three-dimensional columnar image;
and displaying the three-dimensional columnar image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
responding to an image expansion triggering event of a user aiming at the three-dimensional columnar image, and determining the coordinates of a target point to be split in the three-dimensional columnar image; the target point coordinate is one of the two endpoint coordinates on the splitting line corresponding to the image unfolding position;
determining a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the coordinates of the target point, wherein the unfolding plane coordinate point is a preset unfolding plane coordinate point;
determining the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times; the image expansion times are greater than or equal to two;
and carrying out plane expansion processing on the three-dimensional columnar image based on the expansion position coordinates to obtain a plane rectangular image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
responding to an image expansion triggering event of the three-dimensional columnar image by the user, and acquiring an image expansion position determined by the user;
if the image unfolding position is a default position, determining the coordinates of the line end points on a preset splitting line in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image;
and if the image unfolding position is a calibration position, determining the coordinates of the line end points on a calibration splitting line determined by a user in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the coordinates of the target point as the three-dimensional columnar coordinate point;
determining a cylindrical circular plane where the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point are located;
and calculating the distance value between the three-dimensional columnar coordinate point and the coordinate point of the expansion plane based on the columnar circular plane to obtain the coordinate point distance.
In one embodiment, the computer program when executed by the processor further performs the steps of:
based on preset image expansion times, carrying out equal-proportion splitting processing on the coordinate point distance to obtain at least one split point coordinate;
determining the coordinates of the at least one splitting point as first unfolding position coordinates of the target point;
acquiring a second unfolding position coordinate of each intermediate point coordinate based on the first unfolding position coordinate of the target point coordinate; the second unfolding position coordinate is a splitting point coordinate corresponding to the intermediate point coordinate;
and determining the first unfolding position coordinate and the second unfolding position coordinate as the unfolding position coordinate of the three-dimensional columnar image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a middle point coordinate corresponding to a target point coordinate to be split in the three-dimensional columnar image;
moving the target coordinate point based on the first unfolding position coordinate, and sequentially moving the intermediate point coordinate based on the second unfolding position coordinate to perform plane unfolding processing on the three-dimensional columnar image until the target coordinate point coincides with the unfolding plane coordinate point, wherein a straight line formed by connecting the target coordinate point with the intermediate point coordinate is perpendicular to a preset coordinate axis;
and determining the three-dimensional columnar image after the plane expansion processing as the plane rectangular image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring size information of the planar rectangular image;
if the size information is larger than the preset terminal interface size, performing size adjustment processing on the planar rectangular image to enable the first broadside size of the planar rectangular image to be equal to the second broadside size in the terminal interface size;
and responding to the image display triggering operation of the user aiming at the planar rectangular image, and performing rolling rendering on the planar rectangular image after the size adjustment processing to obtain the rendered planar rectangular image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and displaying a target image area in the rendered planar rectangular image in response to an image movement trigger event of the user for the rendered planar rectangular image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A monitoring image display method for an intelligent traffic system is characterized in that the intelligent traffic system comprises a front-end camera and a monitoring terminal with a screen, the monitoring terminal is connected with the front-end camera in a wireless connection mode, the monitoring image display method is applied to the monitoring terminal, and the monitoring image display method comprises the following steps:
acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in an equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is positioned on a circular surface with the three-dimensional columnar coordinate point;
obtaining and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and displaying the rendered plane rectangular image.
2. The monitoring image displaying method for the intelligent transportation system according to claim 1, wherein the step of obtaining and displaying the three-dimensional cylindrical image to be processed comprises:
acquiring at least one monitoring image through the front-end camera to obtain a thumbnail image corresponding to the at least one monitoring image;
displaying a thumbnail image corresponding to the at least one monitoring image;
determining a target monitoring image in the at least one monitoring image in response to a user selected trigger event for the thumbnail image;
based on a preset three-dimensional columnar display model, bending the target monitoring image to obtain a three-dimensional columnar image;
and displaying the three-dimensional columnar image.
3. The method as claimed in claim 1, wherein the step of obtaining an image expansion position determined by a user in response to an image expansion trigger event of the three-dimensional cylindrical image by the user, obtaining a three-dimensional cylindrical coordinate point on the image expansion position, splitting a coordinate point distance between the three-dimensional cylindrical coordinate point and a preset expansion plane coordinate point in an equal proportion to obtain an expansion position coordinate, and performing plane expansion processing on the three-dimensional cylindrical image through the expansion position coordinate to obtain a plane rectangular image includes:
responding to an image expansion triggering event of a user aiming at the three-dimensional columnar image, and determining the coordinates of a target point to be split in the three-dimensional columnar image; the target point coordinate is one of two endpoint coordinates on a splitting line corresponding to the image unfolding position;
determining a coordinate point distance between the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point according to the coordinates of the target point, wherein the unfolding plane coordinate point is a preset unfolding plane coordinate point;
determining the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset image expansion times; the image expansion times are greater than or equal to two;
and carrying out plane expansion processing on the three-dimensional columnar image based on the expansion position coordinates to obtain a plane rectangular image.
4. The monitoring image display method for an intelligent transportation system according to claim 3, wherein the step of determining the coordinates of the target point to be split in the three-dimensional columnar image in response to an image expansion trigger event of the user for the three-dimensional columnar image comprises:
responding to an image expansion triggering event of the three-dimensional columnar image by the user, and acquiring an image expansion position determined by the user;
if the image unfolding position is a default position, determining the coordinates of the line end points on a preset splitting line in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image;
and if the image unfolding position is a calibration position, determining the coordinates of the line end points on a calibration splitting line determined by a user in the three-dimensional columnar image as the coordinates of target points to be split in the three-dimensional columnar image.
5. The monitored image displaying method for an intelligent transportation system according to claim 3, wherein said step of determining a coordinate point distance between said three-dimensional cylindrical coordinate point and said unfolding plane coordinate point according to said target point coordinates comprises:
determining the coordinates of the target point as the three-dimensional columnar coordinate point;
determining a cylindrical circular plane where the three-dimensional cylindrical coordinate point and the unfolding plane coordinate point are located;
and calculating the distance value between the three-dimensional columnar coordinate point and the coordinate point of the expansion plane based on the columnar circular plane to obtain the coordinate point distance.
6. The monitoring image display method for an intelligent transportation system according to claim 3, wherein the three-dimensional columnar image includes a preset number of intermediate point coordinates, and the step of determining the expansion position coordinates of the three-dimensional columnar image according to the coordinate point distance and the preset number of times of image expansion comprises:
based on preset image expansion times, carrying out equal-proportion splitting processing on the coordinate point distance to obtain at least one split point coordinate;
determining the coordinates of the at least one splitting point as first unfolding position coordinates of the target point;
acquiring a second unfolding position coordinate of each intermediate point coordinate based on the first unfolding position coordinate of the target point coordinate; the second unfolding position coordinate is a splitting point coordinate corresponding to the intermediate point coordinate;
and determining the first unfolding position coordinate and the second unfolding position coordinate as the unfolding position coordinate of the three-dimensional columnar image.
7. The monitored image display method for an intelligent transportation system according to claim 3, wherein the coordinates of the expansion position include a first coordinate of the expansion position and a second coordinate of the expansion position, and the step of performing the planar expansion processing on the three-dimensional columnar image based on the coordinates of the expansion position to obtain a planar rectangular image includes:
determining a middle point coordinate corresponding to a target point coordinate to be split in the three-dimensional columnar image;
moving the target coordinate point based on the first unfolding position coordinate, and sequentially moving the intermediate point coordinate based on the second unfolding position coordinate to perform plane unfolding processing on the three-dimensional columnar image until the target coordinate point coincides with the unfolding plane coordinate point, wherein a straight line formed by connecting the target coordinate point with the intermediate point coordinate is perpendicular to a preset coordinate axis;
and determining the three-dimensional columnar image after the plane expansion processing as the plane rectangular image.
8. The method as claimed in claim 1, wherein the step of obtaining and performing scroll rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image comprises:
acquiring size information of the planar rectangular image;
if the size information is larger than the preset terminal interface size, performing size adjustment processing on the planar rectangular image to enable the first broadside size of the planar rectangular image to be equal to the second broadside size in the terminal interface size;
and responding to the image display triggering operation of the user aiming at the planar rectangular image, and performing rolling rendering on the planar rectangular image after the size adjustment processing to obtain the rendered planar rectangular image.
9. The monitoring image displaying method for the intelligent transportation system according to claim 1, wherein the step of displaying the rendered planar rectangular image comprises:
and displaying a target image area in the rendered planar rectangular image in response to an image movement trigger event of the user for the rendered planar rectangular image.
10. A monitoring image display apparatus for an intelligent traffic system, wherein the intelligent traffic system comprises a front-end camera and a monitoring terminal with a screen, the monitoring terminal is connected with the front-end camera in a wireless connection mode, the monitoring image display apparatus is arranged in the monitoring terminal, and the monitoring image display apparatus comprises:
the image acquisition module is used for acquiring and displaying a three-dimensional columnar image to be processed, wherein the three-dimensional columnar image is a bent target monitoring image, and the target monitoring image is an image acquired by the front-end camera;
the image unfolding module is used for responding to an image unfolding trigger event of a user for the three-dimensional columnar image, acquiring an image unfolding position determined by the user, after obtaining a three-dimensional columnar coordinate point on the image unfolding position, splitting the coordinate point distance between the three-dimensional columnar coordinate point and a preset unfolding plane coordinate point in equal proportion to obtain an unfolding position coordinate, and carrying out plane unfolding processing on the three-dimensional columnar image through the unfolding position coordinate to obtain a plane rectangular image; the unfolding plane coordinate point is determined according to the length, the width and the height of the three-dimensional columnar image and is a coordinate point which is positioned on a circular surface with the three-dimensional columnar coordinate point;
the image rendering module is used for acquiring and performing rolling rendering on the planar rectangular image according to the size information of the planar rectangular image to obtain a rendered planar rectangular image;
and the image display module is used for displaying the rendered planar rectangular image.
CN202110134312.1A 2021-02-01 2021-02-01 Monitoring image display method and device Active CN112446823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110134312.1A CN112446823B (en) 2021-02-01 2021-02-01 Monitoring image display method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110134312.1A CN112446823B (en) 2021-02-01 2021-02-01 Monitoring image display method and device

Publications (2)

Publication Number Publication Date
CN112446823A CN112446823A (en) 2021-03-05
CN112446823B true CN112446823B (en) 2021-04-27

Family

ID=74739517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110134312.1A Active CN112446823B (en) 2021-02-01 2021-02-01 Monitoring image display method and device

Country Status (1)

Country Link
CN (1) CN112446823B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5109803B2 (en) * 2007-06-06 2012-12-26 ソニー株式会社 Image processing apparatus, image processing method, and image processing program
JP5143856B2 (en) * 2010-04-16 2013-02-13 株式会社ソニー・コンピュータエンタテインメント 3D image display device and 3D image display method
CN104618688B (en) * 2015-01-19 2017-09-29 荣科科技股份有限公司 A kind of visual control means of defence
US10252417B2 (en) * 2016-03-02 2019-04-09 Canon Kabushiki Kaisha Information processing apparatus, method of controlling information processing apparatus, and storage medium
CN109816587B (en) * 2017-11-20 2021-04-16 杭州海康威视数字技术股份有限公司 Fisheye image processing method and device and electronic equipment
US11183279B2 (en) * 2018-10-25 2021-11-23 Topcon Healthcare Solutions, Inc. Method and apparatus for a treatment timeline user interface
CN110688495B (en) * 2019-12-09 2020-04-24 武汉中科通达高新技术股份有限公司 Method and device for constructing knowledge graph model of event information and storage medium
CN111489295B (en) * 2020-06-29 2020-11-17 平安国际智慧城市科技股份有限公司 Image processing method, electronic device, and storage medium
CN112115804A (en) * 2020-08-26 2020-12-22 北京博睿维讯科技有限公司 Key area monitoring video control method and system, intelligent terminal and storage medium
CN111813290B (en) * 2020-09-09 2020-12-01 武汉中科通达高新技术股份有限公司 Data processing method and device and electronic equipment
CN112288649A (en) * 2020-10-27 2021-01-29 长安大学 Image correction method and device for cylindrical object perspective imaging distortion

Also Published As

Publication number Publication date
CN112446823A (en) 2021-03-05

Similar Documents

Publication Publication Date Title
CN108347657B (en) Method and device for displaying bullet screen information
EP4050305A1 (en) Visual positioning method and device
US20220328019A1 (en) Display terminal adjustment method and display terminal
CN103914876A (en) Method and apparatus for displaying video on 3D map
US20210174599A1 (en) Mixed reality system, program, mobile terminal device, and method
JP6686547B2 (en) Image processing system, program, image processing method
CN104461436A (en) Displaying method of multiple terminals based on different resolution ratios
CN116109765A (en) Three-dimensional rendering method and device for labeling objects, computer equipment and storage medium
WO2022237116A1 (en) Image processing method and apparatus
CN112634366B (en) Method for generating position information, related device and computer program product
CN112446823B (en) Monitoring image display method and device
US20230089845A1 (en) Visual Localization Method and Apparatus
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
JP2017182681A (en) Image processing system, information processing device, and program
CN111858987B (en) Problem viewing method of CAD image, electronic equipment and related products
CN111798573B (en) Electronic fence boundary position determination method and device and VR equipment
CN112539752A (en) Indoor positioning method and indoor positioning device
CN114518859A (en) Display control method, display control device, electronic equipment and storage medium
CN106990932A (en) Image display method and device
CN112788425A (en) Dynamic area display method, device, equipment and computer readable storage medium
US8755819B1 (en) Device location determination using images
US9449364B2 (en) Information processing apparatus, information processing method, and program to recognize circumstances of a subject from a moving image
CN112465692A (en) Image processing method, device, equipment and storage medium
CN113721818B (en) Image processing method, device, equipment and computer readable storage medium
CN112862976B (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant