CN109873980B - Video monitoring method and device and terminal equipment - Google Patents


Info

Publication number
CN109873980B
CN109873980B (application CN201910069108.9A)
Authority
CN
China
Prior art keywords
display
display area
area
code stream
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910069108.9A
Other languages
Chinese (zh)
Other versions
CN109873980A (en
Inventor
吴汉俊
肖婷
韦发林
Current Assignee
Shenzhen Jingyang Information Technology Co ltd
Original Assignee
Shenzhen Jingyang Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jingyang Information Technology Co ltd filed Critical Shenzhen Jingyang Information Technology Co ltd
Priority to CN201910069108.9A priority Critical patent/CN109873980B/en
Publication of CN109873980A publication Critical patent/CN109873980A/en
Application granted granted Critical
Publication of CN109873980B publication Critical patent/CN109873980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention is applicable to the technical field of monitoring and provides a video monitoring method, a video monitoring device, a terminal device, and a computer-readable storage medium. The video monitoring method comprises the following steps: if the display screen is in a multi-display-area segmentation mode, segmenting the display area of the display screen into a global display area and at least one local display area according to the multi-display-area segmentation mode; displaying a global video picture monitored by a camera in the global display area; and displaying a local video picture monitored by the same camera in the local display area, wherein the local video picture is determined according to the position of a rectangular area set in the global display area. With this method, a single camera can provide focused monitoring of multiple objects that the user needs to pay close attention to, which greatly reduces equipment cost.

Description

Video monitoring method and device and terminal equipment
Technical Field
The invention belongs to the technical field of monitoring, and particularly relates to a video monitoring method, a video monitoring device, terminal equipment and a computer readable storage medium.
Background
Video monitoring is widely used in everyday life. A video surveillance system generally includes a video acquisition system, which typically uses a camera to capture video image signals. For example, in a supermarket, camera A may cover the entire store; to pay close attention to the jewelry counter and the cigarette-and-wine counter, camera B must additionally be installed to monitor the jewelry counter and camera C to monitor the cigarette-and-wine counter. That is, if a single scene contains multiple objects requiring focused attention, multiple cameras are needed for monitoring, which makes equipment cost high.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video monitoring method, an apparatus, a terminal device, and a computer-readable storage medium, to solve the prior-art problem that, in a scene with multiple objects requiring focused attention, multiple cameras must be used for monitoring, resulting in high equipment cost.
A first aspect of an embodiment of the present invention provides a video monitoring method, including:
if the display screen is in a multi-display-area segmentation mode, segmenting a display area of the display screen into a global display area and at least one local display area according to the multi-display-area segmentation mode;
displaying a global video picture monitored by a camera on the global display area;
and displaying a local video picture monitored by the camera on the local display area, wherein the local video picture is determined according to the position of a rectangular area arranged in the global display area.
A second aspect of an embodiment of the present invention provides a video monitoring apparatus, including:
a dividing unit, configured to divide the display area of a display screen into a global display area and at least one local display area according to a multi-display-area dividing mode if the display screen is in the multi-display-area dividing mode;
the global video picture display unit is used for displaying a global video picture monitored by a camera on the global display area;
and the local video picture display unit is used for displaying the local video picture monitored by the camera on the local display area, and the local video picture is determined according to the position of a rectangular area arranged in the global display area.
A third aspect of an embodiment of the present invention provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the video surveillance method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the video surveillance method described above.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: if the display screen is in a multi-display-area segmentation mode, the display area of the display screen is segmented into a global display area and at least one local display area according to that mode; a global video picture monitored by a camera is displayed in the global display area; and a local video picture monitored by the same camera is displayed in the local display area, the local video picture being determined according to the position of a rectangular area set in the global display area. In this way, a single camera provides focused monitoring of multiple objects that the user needs to pay close attention to, which greatly reduces equipment cost.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a video monitoring method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a multi-display area division mode according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples. It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the mobile terminals described in embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the devices described above are not portable communication devices, but rather desktop computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads).
In the discussion that follows, a mobile terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the mobile terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The mobile terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the mobile terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
The first embodiment is as follows:
fig. 1 shows a schematic flow chart of a video monitoring method provided in an embodiment of the present application, which is detailed as follows:
step S11, if the display screen is in the multi-display area division mode, dividing the display area of the display screen into a global display area and at least one local display area according to the multi-display area division mode.
Optionally, the step S11 specifically includes: if the display screen is in the multi-display-area dividing mode, determining the number of equal divisions of the display area according to a local-display-area number setting instruction, where the number of equal divisions equals the number of local display areas corresponding to the setting instruction plus one; equally dividing the display area of the display screen into that number of display sub-areas; determining one of the display sub-areas as the global display area; and determining the remaining display sub-areas as the local display areas.
For example, as shown in fig. 2, in the multi-display-area division mode, if the local-display-area number setting instruction specifies three local display areas, the number of equal divisions is four. The display area of the display screen is therefore equally divided into four display sub-areas: the upper-left sub-area is the global display area, and the upper-right, lower-left, and lower-right sub-areas are the first, second, and third local display areas, respectively.
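The equal-division step described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent: the function name and the near-square grid layout are assumptions, with the first (top-left) sub-area treated as the global display area.

```python
import math

def split_display(width, height, num_local):
    """Divide a (width, height) display into num_local + 1 equal
    sub-areas on a near-square grid; the first sub-area serves as the
    global display area and the rest as local display areas."""
    total = num_local + 1                    # local areas plus one global area
    cols = math.ceil(math.sqrt(total))       # grid columns (2 for four areas)
    rows = math.ceil(total / cols)           # grid rows
    w, h = width // cols, height // rows
    areas = [(c * w, r * h, w, h)            # (x, y, width, height)
             for r in range(rows) for c in range(cols)][:total]
    return {"global": areas[0], "local": areas[1:]}

layout = split_display(1920, 1080, num_local=3)   # the 2x2 case of fig. 2
print(layout["global"])   # (0, 0, 960, 540) -- the top-left quadrant
```

With three local areas this reproduces the fig. 2 layout: global area top-left, first local area top-right, second bottom-left, third bottom-right.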
Optionally, after the step S11, the method includes: receiving a display area setting instruction, where the display area setting instruction includes resetting the positions and sizes of the global display area and the local display areas; and updating the positions and sizes of the global display area and the local display areas according to the display area setting instruction.
Optionally, before the step S11, the method includes: displaying a division mode selection interface and receiving a division mode selection instruction, where the division mode selection instruction includes a multi-display-area division mode selection instruction and a global-display-area division mode selection instruction. If a multi-display-area division mode selection instruction is received, a dialog box for setting the number of local display areas is displayed, and the local-display-area number setting instruction sent by the user in that dialog box is received, where the number of local display areas is a positive integer greater than or equal to one. The global-display-area division mode corresponding to the global-display-area division mode selection instruction includes: a mode in which the entire display area of the display screen serves as the global display area, or a mode in which part of the display area serves as the global display area.
Because the division mode selection instruction includes both a multi-display-area division mode selection instruction and a global-display-area division mode selection instruction, users' needs for different numbers of objects requiring focused attention can be met.
And step S12, displaying the global video picture monitored by the camera on the global display area.
Optionally, the step S12 specifically includes: receiving identification information of a camera, and if the identification information is consistent with preset camera identification information, displaying the global video picture monitored by that camera in the global display area.
And step S13, displaying a local video picture monitored by the camera on the local display area, where the local video picture is determined according to a position of a rectangular area set in the global display area.
Optionally, after the step S11, the method includes: sending code stream information to a camera, wherein the code stream information comprises main code stream information and auxiliary code stream information, the main code stream information comprises information of a code stream corresponding to the global display area, and the auxiliary code stream information comprises information of a code stream corresponding to the local display area; receiving a main code stream and an auxiliary code stream which are sent by the camera according to the code stream information; correspondingly, the step S12 includes: displaying a global video picture monitored by a camera on the global display area according to the main code stream; the step S13 includes: and displaying a local video picture monitored by the camera on the local display area according to the auxiliary code stream and the position of the rectangular area arranged in the global display area.
For example, if the display screen is in a multi-display-area division mode that divides the display area into one global display area and three local display areas, the code stream information includes one piece of main code stream information corresponding to the global display area and three pieces of auxiliary code stream information (first, second, and third), each corresponding to one local display area and to the rectangular area set for it in the global display area. Taking the first auxiliary code stream as an example: the first auxiliary code stream sent by the camera according to the first auxiliary code stream information is received, and the local video picture monitored by the camera is displayed in the corresponding local display area according to the first auxiliary code stream and the position of the corresponding rectangular area; the second and third auxiliary code streams are processed in the same way.
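The code-stream bookkeeping in this example can be sketched as a small data structure. The class, field names, and default resolutions below are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class StreamInfo:
    name: str          # e.g. "main" or "aux1" (names are invented here)
    width: int
    height: int
    target_area: str   # display area this stream is rendered into

def build_stream_requests(num_local, main_res=(1920, 1080), aux_res=(640, 360)):
    """Assemble the code-stream information sent to the camera: one main
    stream for the global area plus one auxiliary stream per local area."""
    streams = [StreamInfo("main", *main_res, target_area="global")]
    for i in range(1, num_local + 1):
        streams.append(StreamInfo(f"aux{i}", *aux_res, target_area=f"local{i}"))
    return streams

for s in build_stream_requests(3):
    print(s.name, "->", s.target_area)
```

A real terminal would serialize such records into whatever request format its camera protocol uses; the sketch only shows the one-main-plus-N-auxiliary structure.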
Optionally, after the step S11, the method includes: determining the area of the rectangular region in the global display region according to auxiliary code stream information and a preset rectangular-area rule, where the auxiliary code stream information includes the auxiliary code stream resolution and the preset rectangular-area rule is: the area of the rectangular region divided by the area of the global display region equals the auxiliary code stream resolution divided by the camera resolution; and determining the position of the rectangular region in the global display region according to a preset length-width ratio, the area of the rectangular region, and any one point of the rectangular region.
Wherein the preset aspect ratio is a preset aspect ratio of the rectangular region.
Specifically, the resolution of the camera is obtained, and the area of the rectangular region in the global display region is determined according to the camera resolution, the auxiliary code stream information, and the preset rectangular-area rule, where the auxiliary code stream information includes the auxiliary code stream resolution and the rule is: the area of the rectangular region divided by the area of the global display region equals the auxiliary code stream resolution divided by the camera resolution. The position of the rectangular region in the global display region is then determined according to the preset length-width ratio, the area of the rectangular region, and any one point of the rectangular region, or according to the area of the rectangular region and any two vertexes of the rectangular region. The rectangular region carries a highlight mark, which includes highlighting the boundary of the rectangular region in color.
The length and width of the rectangle can be determined from the preset length-width ratio and the area of the rectangular region, which fixes the rectangle's shape; combined with the coordinates of any one point of the rectangle and that point's positional relation to the rectangle, the position of the rectangular region in the global display region can be determined. Alternatively, the length and width can be determined from the area of the rectangular region and any two vertexes, and the position determined from the coordinates of those two vertexes. This improves the accuracy of the rectangular region's position and, indirectly, the accuracy of focused monitoring of the objects the user needs to pay close attention to.
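The sizing rule and the aspect-ratio placement above can be worked through in a short sketch. This is an illustrative rendering under stated assumptions: the anchor point is taken as the rectangle's top-left corner, and the clamping to keep the rectangle inside the global area is an addition not stated in the patent.

```python
def rectangle_in_global(global_area, camera_res, aux_res, aspect, top_left):
    """Size and place the focus rectangle inside the global display area.
    Per the preset rule, rect_area / global_area equals the auxiliary
    stream resolution / camera resolution; the preset length-width
    (aspect) ratio then fixes the rectangle's shape."""
    gx, gy, gw, gh = global_area
    ratio = (aux_res[0] * aux_res[1]) / (camera_res[0] * camera_res[1])
    rect_area = gw * gh * ratio
    rect_h = (rect_area / aspect) ** 0.5   # since area = aspect * h**2
    rect_w = aspect * rect_h
    x, y = top_left                        # the "any one point" anchor
    # keep the rectangle fully inside the global display area
    x = min(max(x, gx), gx + gw - rect_w)
    y = min(max(y, gy), gy + gh - rect_h)
    return (x, y, rect_w, rect_h)

# 640x360 auxiliary stream from a 1920x1080 camera, 960x540 global area:
rect = rectangle_in_global((0, 0, 960, 540), (1920, 1080), (640, 360),
                           aspect=16 / 9, top_left=(100, 100))
print(rect)
```

With these numbers the resolution ratio is 1/9, so the rectangle covers one ninth of the global area, giving a 320x180 rectangle anchored at (100, 100).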
Optionally, if there are multiple local display areas on the display screen, the multiple local display areas correspond to multiple rectangular areas, overlapping areas may exist between the rectangular areas, and if there is an overlapping area between the rectangular areas, the local video picture corresponding to the rectangular area where the overlapping area exists includes a video picture corresponding to the overlapping area.
For example, as shown in fig. 2, an overlap region exists between a first rectangular region and a second rectangular region, and if the first rectangular region corresponds to a first partial display region and the second rectangular region corresponds to a second partial display region, both a partial video picture displayed in the first partial display region and a partial video picture displayed in the second partial display region include video pictures corresponding to the overlap region.
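The overlap relation between two rectangular regions can be checked with a standard axis-aligned intersection test; this is a generic sketch, not code from the patent:

```python
def overlap(r1, r2):
    """Return the overlapping region of two (x, y, w, h) rectangles,
    or None if they do not intersect."""
    x1 = max(r1[0], r2[0])
    y1 = max(r1[1], r2[1])
    x2 = min(r1[0] + r1[2], r2[0] + r2[2])
    y2 = min(r1[1] + r1[3], r2[1] + r2[3])
    if x2 <= x1 or y2 <= y1:
        return None                        # disjoint rectangles
    return (x1, y1, x2 - x1, y2 - y1)

print(overlap((0, 0, 100, 100), (50, 50, 100, 100)))   # (50, 50, 50, 50)
```

When such an overlap exists, the video content inside it is simply rendered in both corresponding local display areas, as the example with the first and second rectangular regions describes.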
Optionally, the video monitoring method further includes: and if a right mouse click command is detected in the rectangular region, displaying an auxiliary code stream resolution option, and setting the auxiliary code stream resolution corresponding to the rectangular region according to the received auxiliary code stream resolution selection command.
For example, if a right mouse click command is detected in the rectangular area, a secondary stream resolution option is displayed, where the secondary stream resolution option includes: 640 × 360 secondary stream resolution, 960 × 540 secondary stream resolution, and if the received secondary stream resolution selection instruction selects the secondary stream resolution of 640 × 360, setting the secondary stream resolution corresponding to the rectangular region to be 640 × 360.
Optionally, the video monitoring method further includes: and if a rectangular area position changing instruction is received, the rectangular area position changing instruction comprises a sliding track instruction and a coordinate value setting instruction. If a sliding track instruction is received, judging whether the starting point coordinate of the sliding track is on the rectangular area according to the sliding track instruction, and if the starting point coordinate of the sliding track is on the rectangular area, re-determining the position of the rectangular area in the global display area according to the starting point coordinate and the end point coordinate of the sliding track; and if a coordinate value setting instruction is received, re-determining the position of the rectangular area in the global display area according to the coordinate value setting instruction.
Optionally, after the step S13, the method includes: if a video recording setting instruction is received, setting the video recording parameter information of the local display area corresponding to the instruction, where the video recording parameter information includes recording time information; and generating a recording of the local video picture displayed in that local display area according to the video recording parameter information.
Optionally, the video recording setting instruction includes a setting instruction for the video recording parameter information of a single local display area, or for that of at least two local display areas.
For example, suppose a first video recording setting instruction and a second video recording setting instruction are received, where the first instruction sets the video recording parameter information of the first local display area and the second instruction sets that of the second local display area. The video recording parameter information of the first local display area, which includes its recording time information and the name of its corresponding auxiliary code stream, is set according to the first instruction; the video recording parameter information of the second local display area, which likewise includes its recording time information and the name of its corresponding auxiliary code stream, is set according to the second instruction.
The video recording parameter information of each local display area corresponding to a video recording setting instruction can thus be set, and a recording of the local video picture displayed in each local display area is generated according to that information. Because these recordings are independent of one another, a single camera can record multiple objects that the user needs to pay close attention to, greatly reducing recording equipment cost.
Optionally, after the step S13, the method includes: judging whether an alarm triggering event occurs according to the video frame corresponding to the auxiliary code stream and a preset video frame analysis rule; if an alarm triggering event occurs, generating alarm event occurrence information; displaying the alarm event occurrence information on the local display area.
Specifically, whether an alarm-trigger event occurs is judged according to the video frames corresponding to the auxiliary code stream and a preset video frame analysis rule. The rule is: obtain feature difference data between adjacent frames from the consecutive video frames of the auxiliary code stream; calculate the alarm-trigger event probability from the feature difference data, where this probability represents the likelihood that the video content corresponds to an alarm-trigger event; and if the probability is greater than or equal to a preset alarm-trigger probability threshold, judge that an alarm-trigger event has occurred. Alarm-trigger events include motion detection alarm events and article-loss alarm events. If an alarm-trigger event occurs, a recording instruction is sent to the camera, alarm event occurrence information is generated, and that information is displayed in the local display area in the form of icons, text, or sound.
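The frame-difference judgment can be sketched as follows. The choice of changed-pixel ratio as the "feature difference data" and the averaging of ratios into a probability are illustrative assumptions; the patent does not fix a particular feature or formula. Frames are modeled as flat lists of grayscale values.

```python
def frame_diff_ratio(prev, curr, pixel_threshold=25):
    """Fraction of pixels whose grayscale change exceeds pixel_threshold;
    a simple stand-in for the feature difference data between frames."""
    changed = sum(abs(a - b) > pixel_threshold for a, b in zip(prev, curr))
    return changed / len(prev)

def alarm_triggered(frames, prob_threshold=0.3, pixel_threshold=25):
    """Judge an alarm-trigger event over consecutive aux-stream frames:
    the event probability is taken as the mean changed-pixel ratio and
    compared against the preset alarm-trigger probability threshold."""
    ratios = [frame_diff_ratio(a, b, pixel_threshold)
              for a, b in zip(frames, frames[1:])]
    probability = sum(ratios) / len(ratios)
    return probability >= prob_threshold

static = [[10] * 64, [12] * 64, [11] * 64]    # sensor noise only
moving = [[0] * 64, [200] * 64, [0] * 64]     # large frame-to-frame change
print(alarm_triggered(static), alarm_triggered(moving))   # False True
```

A production system would use a learned detector or background-subtraction model rather than raw pixel differences, but the threshold-on-probability structure is the same.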
Optionally, after generating alarm event occurrence information if an alarm triggering event occurs, the method includes: and sending alarm event processing information to alarm linkage equipment according to the alarm event occurrence information and a preset alarm event processing rule.
Specifically, the alarm event occurrence information includes the occurrence time, the occurrence location, and the type of the alarm event. After the alarm event occurrence information is generated, the alarm level of the event is determined from its occurrence time, occurrence location, and type according to a preset alarm event processing rule; the alarm event processing information and the alarm linkage device corresponding to that alarm level are determined; and the processing information is sent to the corresponding alarm linkage device. The preset alarm event processing rule includes: determining the alarm level from the occurrence time, occurrence location, and event type, and determining the processing information and linkage device corresponding to that level. The alarm linkage devices include lighting controls and alarms, with different devices corresponding to different alarm levels.
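A rule of this shape can be organized as a small lookup. The specific levels, escalation policy, and device mapping below are invented for illustration; the patent only requires that time, location, and type jointly determine a level, and that each level maps to linkage devices.

```python
def dispatch_alarm(hour, location, event_type):
    """Map an alarm event to an alarm level and the linkage devices for
    that level. The rule table is illustrative only; `location` could
    further refine the rule but is unused in this sketch."""
    level = 2 if event_type == "article_loss" else 1
    if hour < 8 or hour >= 22:        # night-time events escalate one level
        level += 1
    devices = {1: ["lighting"],
               2: ["lighting", "alarm"],
               3: ["lighting", "alarm"]}
    return level, devices[level]

print(dispatch_alarm(23, "jewelry counter", "article_loss"))
```

Here a late-night article-loss event reaches the highest level and triggers both the lighting control and the alarm.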
The alarm event processing information can be sent to the alarm linkage equipment according to the alarm event occurrence information and the preset alarm event processing rule, so that the alarm timeliness is improved, and the damage degree of the alarm triggering event is effectively reduced.
Optionally, the video monitoring method further includes: and if the display screen is not in the multi-display-area segmentation mode, displaying a global video picture monitored by the camera on the display screen.
Specifically, if the display screen is not in the multi-display-area segmentation mode, main code stream information is sent to the camera, the main code stream sent by the camera according to that information is received, and the global video picture monitored by the camera is displayed on the display screen according to the main code stream.
Optionally, if a segmentation mode switching instruction is received, detecting a current segmentation mode, and if the current segmentation mode is in a multi-display-area segmentation mode, exiting the multi-display-area segmentation mode, and displaying a global video picture monitored by a camera on the display screen; and if the display screen is not in the multi-display-area segmentation mode, stopping displaying the global video picture monitored by the camera on the display screen, and entering the multi-display-area segmentation mode.
In the embodiments of the present invention, if the display screen is in a multi-display-area division mode, the display area of the display screen is divided into a global display area and at least one local display area according to that mode; a global video picture monitored by a camera is displayed in the global display area; and a local video picture monitored by the same camera is displayed in the local display area, the local video picture being determined according to the position of a rectangular area set in the global display area. In this way, a single camera provides focused monitoring of multiple objects that the user needs to pay close attention to, which greatly reduces equipment cost.
Example two:
corresponding to the above embodiments, fig. 3 shows a schematic structural diagram of a video monitoring apparatus provided in an embodiment of the present application, and for convenience of description, only the portions related to the embodiment of the present application are shown.
The video monitoring device includes: a dividing unit 31, a global video picture display unit 32, and a local video picture display unit 33.
The dividing unit 31 is configured to, if the display screen is in the multi-display-area dividing mode, divide the display area of the display screen into a global display area and at least one local display area according to the multi-display-area dividing mode.
Optionally, the dividing unit 31 is specifically configured to: if the display screen is in the multi-display-area division mode, determine a display-area division count according to a number setting instruction for the local display areas, where the division count is the number of local display areas corresponding to the number setting instruction plus one; equally divide the display area of the display screen into display sub-areas whose number equals the division count; determine one of the display sub-areas as the global display area; and determine the remaining display sub-areas as the local display areas.
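The equal-division scheme above (division count = number of local display areas + 1) can be sketched as follows. The vertical-strip layout is an assumption for illustration; the patent does not prescribe a particular arrangement of the sub-areas.

```python
def divide_display(screen_w, screen_h, num_local_areas):
    """Divide the screen into (num_local_areas + 1) equal sub-areas.

    Sub-areas are laid out as equal vertical strips, each given as an
    (x, y, width, height) tuple; the first strip serves as the global
    display area. The strip arrangement is an illustrative assumption.
    """
    count = num_local_areas + 1           # division count per the scheme above
    strip_w = screen_w // count
    areas = [(i * strip_w, 0, strip_w, screen_h) for i in range(count)]
    global_area, local_areas = areas[0], areas[1:]
    return global_area, local_areas
```

For example, a 1920x1080 screen with three local display areas is split into four 480-pixel-wide strips.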
Optionally, the video monitoring apparatus further includes an updating unit. The updating unit is configured to: after the dividing unit 31 divides the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, receive a display area setting instruction, where the display area setting instruction includes resetting the positions and sizes of the global display area and the local display areas; and update the positions and sizes of the global display area and the local display areas according to the display area setting instruction.
Optionally, the video monitoring apparatus further includes a division mode selection unit. The division mode selection unit is configured to: before the dividing unit 31 divides the display area of the display screen into a global display area and at least one local display area, display a division mode selection interface and receive a division mode selection instruction, where the division mode selection instruction includes a multi-display-area division mode selection instruction and a global display area division mode selection instruction. If the multi-display-area division mode selection instruction is received, a dialog box for setting the number of local display areas is displayed, and the number setting instruction sent by the user in that dialog box is received, where the number of local display areas is a positive integer equal to or greater than one. The global display area division mode corresponding to the global display area division mode selection instruction includes: a mode in which the entire display area of the display screen serves as the global display area, or a mode in which part of the display area of the display screen serves as the global display area.
Because the division mode selection instruction includes both a multi-display-area division mode selection instruction and a global display area division mode selection instruction, users' needs for different numbers of objects requiring close attention can be met.
The global video picture display unit 32 is configured to display the global video picture monitored by the camera on the global display area.
Optionally, the global video picture display unit 32 is specifically configured to: receive identification information of a camera, and if the identification information of the camera matches preset camera identification information, display the global video picture monitored by that camera on the global display area.
A local video picture display unit 33, configured to display, on the local display area, a local video picture monitored by the camera, where the local video picture is determined according to a position of a rectangular area set in the global display area.
Optionally, the video monitoring apparatus further includes a code stream receiving unit. The code stream receiving unit is configured to: after the dividing unit 31 divides the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, send code stream information to the camera, where the code stream information includes main code stream information and auxiliary code stream information, the main code stream information includes information of the code stream corresponding to the global display area, and the auxiliary code stream information includes information of the code stream corresponding to the local display area; and receive the main code stream and the auxiliary code stream sent by the camera according to the code stream information. Correspondingly, the global video picture display unit 32 is configured to display the global video picture monitored by the camera on the global display area according to the main code stream, and the local video picture display unit 33 is configured to display the local video picture monitored by the camera on the local display area according to the auxiliary code stream and the position of the rectangular area set in the global display area.
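As a rough illustration of the two-stream exchange, the terminal could send a request describing the main and auxiliary streams and then route each received stream to its display area. The message shape and field names below are invented for the sketch, not taken from the patent.

```python
def build_stream_request(global_resolution, local_resolutions):
    """Build the code stream information sent to the camera: one main
    stream for the global area, one auxiliary stream per local area.
    The dictionary format is an illustrative assumption."""
    return {
        "main_stream": {"resolution": global_resolution},
        "sub_streams": [{"resolution": r} for r in local_resolutions],
    }

def route_streams(request):
    """Map each stream description to the display area it should feed."""
    routing = {"global": request["main_stream"]}
    for i, sub in enumerate(request["sub_streams"]):
        routing[f"local_{i}"] = sub
    return routing
```

One entry per display area results: the global area receives the main stream, each local area an auxiliary stream.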
Optionally, the video monitoring apparatus further includes a rectangular area position determining unit. The rectangular area position determining unit is configured to: after the dividing unit 31 divides the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, determine the area of the rectangular region in the global display area according to the auxiliary code stream information and a preset rectangular area rule, where the auxiliary code stream information includes the auxiliary code stream resolution, and the preset rectangular area rule is: the quotient of the area of the rectangular region divided by the area of the global display area equals the quotient of the auxiliary code stream resolution divided by the resolution of the camera; and determine the position of the rectangular region in the global display region according to a preset aspect ratio, the area of the rectangular region, and any point of the rectangular region.
The preset aspect ratio is the preset ratio of the length to the width of the rectangular region.
The rectangular area position determining unit is specifically configured to: after the dividing unit 31 divides the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, acquire the resolution of the camera, and determine the area of the rectangular region in the global display area according to the resolution of the camera, the auxiliary code stream information, and a preset rectangular area rule, where the auxiliary code stream information includes the auxiliary code stream resolution, and the preset rectangular area rule is: the quotient of the area of the rectangular region divided by the area of the global display area equals the quotient of the auxiliary code stream resolution divided by the resolution of the camera; and determine the position of the rectangular region in the global display region according to a preset aspect ratio, the area of the rectangular region, and any one point of the rectangular region, or according to the area of the rectangular region and any two vertices of the rectangular region. The rectangular region carries a highlight mark, which includes a color-highlighted boundary of the rectangular region.
The rectangular area position determining unit can determine the length and width of the rectangle from the preset aspect ratio and the area of the rectangular region, that is, the shape of the rectangular region; the position of the rectangular region in the global display area is then fixed by combining the coordinates of any point of the rectangular region with the positional relationship between that point and the rectangular region. Alternatively, the length and width can be determined from the area of the rectangular region and any two vertices of the rectangular region, with the coordinates of those two vertices then fixing its position in the global display area. This improves the accuracy of the rectangular region's position, and thereby the accuracy of focused monitoring of the objects the user needs to pay close attention to.
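The geometric rule above can be made concrete. Assuming resolutions are measured in total pixels and the rectangle is anchored by its top-left corner (the patent allows any point; the top-left choice is an assumption), the rectangle's size follows from the area rule and the preset aspect ratio:

```python
import math

def rectangle_geometry(global_w, global_h, sub_res_pixels, cam_res_pixels,
                       aspect_ratio, top_left):
    """Compute the focus rectangle per the area rule described above:

        rect_area / global_area == sub_stream_pixels / camera_pixels

    aspect_ratio is width / height; top_left anchors the rectangle.
    Returns (x, y, width, height)."""
    global_area = global_w * global_h
    rect_area = global_area * sub_res_pixels / cam_res_pixels
    # area = w * h and w = aspect_ratio * h, so area = aspect_ratio * h^2
    h = math.sqrt(rect_area / aspect_ratio)
    w = aspect_ratio * h
    x, y = top_left
    return (x, y, w, h)
```

For an 800x600 global area, a 1280x720 auxiliary stream, and a 1920x1080 camera, the rectangle covers 921600/2073600 of the global area.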
Optionally, if there are multiple local display areas on the display screen, the multiple local display areas correspond to multiple rectangular areas, overlapping areas may exist between the rectangular areas, and if there is an overlapping area between the rectangular areas, the local video picture corresponding to the rectangular area where the overlapping area exists includes a video picture corresponding to the overlapping area.
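One conventional way to detect the overlap between rectangular areas mentioned above is an axis-aligned bounding-box test; the helper below is an illustrative sketch using (x, y, width, height) tuples.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) rectangles.

    Returns True only when the interiors intersect, i.e. the rectangles
    share a region whose video picture would appear in both local areas."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

With this test, the terminal can decide which local video pictures must both include the picture of the shared region.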
Optionally, the video monitoring apparatus further includes a rectangular area position changing unit. The rectangular area position changing unit is configured to handle a rectangular area position changing instruction, which includes a sliding track instruction and a coordinate value setting instruction. If a sliding track instruction is received, whether the starting point coordinate of the sliding track lies on the rectangular area is judged according to the sliding track instruction; if so, the position of the rectangular area in the global display area is re-determined according to the starting point and end point coordinates of the sliding track. If a coordinate value setting instruction is received, the position of the rectangular area in the global display area is re-determined according to the coordinate value setting instruction.
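The sliding-track repositioning can be sketched as follows: the rectangle moves by the slide vector only when the slide starts inside it. Clamping the result to the global display area is an added assumption, not stated in the text.

```python
def reposition_rect(rect, start, end, global_area):
    """Translate rect (x, y, w, h) by the slide vector end - start if the
    slide starts inside the rectangle; otherwise return rect unchanged.
    The result is clamped so the rectangle stays within global_area."""
    x, y, w, h = rect
    sx, sy = start
    if not (x <= sx <= x + w and y <= sy <= y + h):
        return rect  # slide did not begin on the rectangle: ignore it
    dx, dy = end[0] - sx, end[1] - sy
    gx, gy, gw, gh = global_area
    nx = min(max(x + dx, gx), gx + gw - w)
    ny = min(max(y + dy, gy), gy + gh - h)
    return (nx, ny, w, h)
```

A coordinate value setting instruction would bypass the slide check and set (x, y) directly.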
Optionally, the video monitoring apparatus further includes a video recording unit. The video recording unit is configured to: after the local video picture display unit 33 displays, on the local display area, the local video picture monitored by the camera (the local video picture being determined according to the position of the rectangular area set in the global display area), if a video recording setting instruction is received, set the video recording parameter information of the local display area corresponding to that instruction according to the instruction, where the video recording parameter information includes recording time information; and generate a recording of the local video picture displayed in the local display area according to the video recording parameter information.
Optionally, the video recording setting instruction includes both a setting instruction for the video recording parameter information of a single local display area and a setting instruction for the video recording parameter information of at least two local display areas.
The video recording unit can set the recording parameter information of the multiple local display areas corresponding to a recording setting instruction and then generate, according to that parameter information, recordings of the local video pictures displayed in those areas. Because the recording of each local display area is independent of the others, a single camera can record multiple objects that the user needs to pay close attention to, greatly reducing recording equipment cost.
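Independent per-area recording settings could be kept in a simple mapping from local display area to its parameter dictionary; a single instruction may then target one area or several at once. The data shapes here are illustrative assumptions.

```python
def apply_recording_settings(configs, instruction):
    """Merge a recording setting instruction into the per-area configs.

    configs:     {area_id: {parameter: value, ...}, ...}
    instruction: {area_id: {parameter: value, ...}, ...} — may address a
                 single local display area or several areas at once.
    Each area's settings stay independent of the others."""
    for area_id, params in instruction.items():
        configs.setdefault(area_id, {}).update(params)
    return configs
```

Applying instructions one after another only touches the areas each instruction names.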
Optionally, the video monitoring apparatus further includes an alarm event occurrence information generating unit, configured to: after the local video picture display unit 33 displays the local video picture monitored by the camera on the local display area, judge whether an alarm triggering event occurs according to the video frames corresponding to the auxiliary code stream and a preset video frame analysis rule; if an alarm triggering event occurs, generate alarm event occurrence information; and display the alarm event occurrence information on the local display area.
The alarm event occurrence information generating unit is specifically configured to judge whether an alarm triggering event occurs according to the video frames corresponding to the auxiliary code stream and a preset video frame analysis rule. The preset video frame analysis rule is specifically: obtain feature difference data between adjacent video frames from the consecutive video frames corresponding to the auxiliary code stream; calculate the occurrence probability of an alarm triggering event from the feature difference data, where this probability represents the probability that the video content corresponding to the frames constitutes an alarm triggering event; and if the probability is equal to or greater than a preset alarm trigger probability threshold, judge that an alarm triggering event has occurred. The alarm triggering event includes a motion detection alarm triggering event and an article loss alarm triggering event. If an alarm triggering event occurs, a recording instruction is sent to the camera, and alarm event occurrence information is generated and displayed on the local display area; the display form of the alarm event occurrence information includes icons, text, and sound.
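The frame-analysis rule above leaves the feature-difference metric unspecified. The sketch below substitutes a deliberately simple metric — mean absolute pixel difference normalised to [0, 1] — purely to illustrate the probability-threshold judgment; a real system would use a proper motion or object-loss detector.

```python
def alarm_probability(prev_frame, frame):
    """Toy occurrence-probability metric: mean absolute pixel difference
    between adjacent frames, normalised to [0, 1]. Frames are flat lists
    of 8-bit grey values of equal length. The metric is an illustrative
    assumption standing in for the unspecified feature difference data."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    return diff / (255 * len(frame))

def alarm_triggered(prev_frame, frame, threshold=0.1):
    """Judge an alarm triggering event when the occurrence probability is
    equal to or greater than the preset threshold, as described above."""
    return alarm_probability(prev_frame, frame) >= threshold
```

Small sensor noise stays below the threshold; a frame-wide change exceeds it.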
Optionally, the video monitoring apparatus further includes an alarm linkage unit. The alarm linkage unit is configured to: after the alarm event occurrence information generating unit generates the alarm event occurrence information, send alarm event processing information to alarm linkage equipment according to the alarm event occurrence information and a preset alarm event processing rule.
The alarm linkage unit is specifically configured to: where the alarm event occurrence information includes the occurrence time, the occurrence place, and the type of the alarm event, determine the occurrence time, occurrence place, and alarm event type from the alarm event occurrence information after it is generated; determine the alarm level of the alarm event according to the occurrence time, occurrence place, alarm event type, and a preset alarm event processing rule; determine the alarm event processing information and the alarm linkage equipment corresponding to that alarm level; and send the alarm event processing information to the corresponding alarm linkage equipment. The preset alarm event processing rule includes: determining the alarm level of the alarm event according to its occurrence time, occurrence place, and type, and determining the alarm event processing information and alarm linkage equipment corresponding to that level, where the alarm linkage equipment includes an illumination control and an alarm, which correspond to different alarm levels.
Because the alarm event processing information is sent to the alarm linkage equipment according to the alarm event occurrence information and the preset alarm event processing rule, alarm timeliness is improved and the damage caused by the alarm triggering event is effectively reduced.
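The level-based linkage could look like the following sketch. The specific escalation table and device names are invented for illustration; the patent only requires that time, place, and event type map to a level, and the level to processing information and a device.

```python
def alarm_level(hour, place, event_type):
    """Hypothetical escalation rule mapping (time, place, type) to a
    level in 1..3. Every rule below is an illustrative assumption."""
    level = 1
    if event_type == "item_loss":
        level += 1
    if hour >= 22 or hour < 6:       # night-time events escalate
        level += 1
    if place == "entrance":
        level += 1
    return min(level, 3)

# Level -> (processing information, linkage device); table is invented.
LINKAGE = {
    1: ("turn on lighting", "lighting_control"),
    2: ("turn on lighting and sound alarm", "alarm"),
    3: ("sound alarm and notify operator", "alarm"),
}

def dispatch(hour, place, event_type):
    """Determine the level, then the processing info and target device."""
    level = alarm_level(hour, place, event_type)
    info, device = LINKAGE[level]
    return {"level": level, "device": device, "info": info}
```

A night-time item-loss event at the entrance escalates to the highest level; a daytime motion event elsewhere only switches on lighting.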
Optionally, the video monitoring apparatus further includes a unit for the case where the display screen is not in the multi-display-area division mode. This unit is configured to: if the display screen is not in the multi-display-area division mode, display the global video picture monitored by the camera on the display screen.
This unit is specifically configured to: if the display screen is not in the multi-display-area division mode, send main code stream information to the camera, receive the main code stream sent by the camera according to the main code stream information, and display the global video picture monitored by the camera on the display screen according to the main code stream.
Optionally, the video monitoring apparatus further includes a division mode switching unit. The division mode switching unit is configured to: if a division mode switching instruction is received, detect the current division mode; if the display screen is currently in the multi-display-area division mode, exit the multi-display-area division mode and display the global video picture monitored by the camera on the display screen; if the display screen is not in the multi-display-area division mode, stop displaying the global video picture monitored by the camera on the display screen and enter the multi-display-area division mode.
In the embodiment of the invention, if the display screen is in the multi-display-area division mode, the display area of the display screen is divided into a global display area and at least one local display area according to the multi-display-area division mode; a global video picture monitored by a camera is displayed on the global display area, and a local video picture monitored by the camera is displayed on the local display area, the local video picture being determined according to the position of a rectangular area set in the global display area. In this way, a single camera provides focused monitoring of the multiple objects the user needs to pay close attention to, greatly reducing equipment cost.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example three:
fig. 4 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 4, the terminal device 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42 stored in said memory 41 and executable on said processor 40. The processor 40, when executing the computer program 42, implements the steps of the various video surveillance method embodiments described above, such as the steps S11-S13 shown in fig. 1. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the units in the device embodiments described above, such as the functions of the units 31 to 33 shown in fig. 3.
Illustratively, the computer program 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 42 in the terminal device 4. For example, the computer program 42 may be divided into a segmentation unit, a global video frame display unit, and a local video frame display unit, and each unit has the following specific functions:
and the dividing unit is used for dividing the display area of the display screen into a global display area and at least one local display area according to the multi-display-area dividing mode if the display screen is in the multi-display-area dividing mode.
And the global video picture display unit is used for displaying the global video picture monitored by the camera on the global display area.
And the local video picture display unit is used for displaying the local video picture monitored by the camera on the local display area, and the local video picture is determined according to the position of a rectangular area arranged in the global display area.
The terminal device 4 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of a terminal device 4 and does not constitute a limitation of terminal device 4 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. The memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing the computer program and other programs and data required by the terminal device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A video surveillance method, comprising:
if the display screen is in the multi-display-area division mode, dividing the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, which specifically includes: if the display screen is in the multi-display-area division mode, determining a display-area division count according to a number setting instruction for local display areas, wherein the display-area division count is the number of local display areas corresponding to the number setting instruction plus one; equally dividing the display area of the display screen into display sub-areas whose number equals the display-area division count; determining one of the display sub-areas of the display screen as the global display area; and determining the display sub-areas other than the global display area as the local display areas;
sending code stream information to a camera, wherein the code stream information comprises main code stream information and auxiliary code stream information, the main code stream information comprises information of a code stream corresponding to the global display area, and the auxiliary code stream information comprises information of a code stream corresponding to the local display area;
receiving a main code stream and an auxiliary code stream which are sent by the camera according to the code stream information;
displaying a global video picture monitored by a camera on the global display area according to the main code stream;
displaying a local video picture monitored by the camera on the local display area according to the auxiliary code stream and the position of a rectangular area arranged in the global display area;
judging whether an alarm triggering event occurs according to the video frames corresponding to the auxiliary code stream and a preset video frame analysis rule, wherein the preset video frame analysis rule specifically comprises: acquiring feature difference data between adjacent video frames from the consecutive video frames corresponding to the auxiliary code stream; calculating the occurrence probability of an alarm triggering event according to the feature difference data, wherein the occurrence probability of the alarm triggering event represents the probability that the video content corresponding to the video frames is an alarm triggering event; and if the occurrence probability of the alarm triggering event is equal to or greater than a preset alarm trigger probability threshold, judging that an alarm triggering event occurs;
if an alarm triggering event occurs, generating alarm event occurrence information;
displaying the alarm event occurrence information on the local display area.
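A minimal sketch of the frame-analysis rule in claim 1, assuming mean absolute pixel difference between adjacent frames as the feature difference data and a simple scaling into [0, 1] as the probability model — the claim fixes neither, so both choices (and all names) here are hypothetical:

```python
def alarm_probability(frames):
    """frames: list of equal-sized 2-D grayscale frames (lists of lists, 0-255)."""
    if len(frames) < 2:
        return 0.0
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        # Feature difference data: mean absolute difference per pixel
        total = sum(abs(a - b) for row_p, row_c in zip(prev, curr)
                    for a, b in zip(row_p, row_c))
        pixels = len(prev) * len(prev[0])
        diffs.append(total / pixels)
    # Scale the largest inter-frame change into [0, 1] as the occurrence probability
    return min(max(diffs) / 255.0, 1.0)

def alarm_triggered(frames, threshold=0.3):
    """Judge that an alarm triggering event occurs when the probability
    is equal to or greater than the preset threshold."""
    return alarm_probability(frames) >= threshold
```

A real implementation would use a stronger feature (e.g. background subtraction) and a calibrated probability model; the structure — feature difference, probability, threshold comparison — follows the claim.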
2. The video surveillance method of claim 1, further comprising:
if the display screen is not in the multi-display-area division mode, displaying the global video picture monitored by the camera on the display screen.
3. The video surveillance method of claim 1, further comprising, after the dividing of the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode:
determining the area of a rectangular region in the global display region according to auxiliary code stream information and a preset rectangular area rule, wherein the auxiliary code stream information comprises an auxiliary code stream resolution, and the preset rectangular area rule is: the quotient of the area of the rectangular region divided by the area of the local display area is equal to the quotient of the auxiliary code stream resolution divided by the camera resolution;
determining the position of the rectangular region in the global display region according to a preset aspect ratio, the area of the rectangular region, and any point of the rectangular region.
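The area and position rules of claim 3 reduce to simple arithmetic. A sketch assuming the anchor point is the rectangle's top-left corner (function and parameter names are illustrative, not from the patent):

```python
import math

def rectangle_geometry(local_area, aux_res, cam_res, aspect_ratio, top_left):
    """Claim-3 rule: rect_area / local_area == aux_res / cam_res.
    aspect_ratio is width/height; top_left is the chosen anchor point (x, y)."""
    # Area from the resolution ratio
    rect_area = local_area * (aux_res / cam_res)
    # Dimensions from the preset aspect ratio: w * h = area, w / h = ratio
    width = math.sqrt(rect_area * aspect_ratio)
    height = rect_area / width
    x, y = top_left
    return {"area": rect_area, "width": width, "height": height,
            "rect": (x, y, x + width, y + height)}
```

For example, with a 960×540 px local area (518400 px²), a 1280×720 auxiliary stream (921600 px) and a 3840×2160 camera (8294400 px), the ratio is 1/9, giving a 320×180 rectangle at the chosen anchor.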
4. The video surveillance method of claim 1, further comprising, after the displaying of the local video picture monitored by the camera on the local display area:
if a recording setting instruction is received, setting recording parameter information of the local display area corresponding to the recording setting instruction according to the instruction, wherein the recording parameter information comprises recording time information;
and generating a recording of the local video picture displayed in the local display area according to the recording parameter information.
5. The video surveillance method of claim 1, further comprising, after generating the alarm event occurrence information when an alarm triggering event occurs:
sending alarm event processing information to an alarm linkage device according to the alarm event occurrence information and a preset alarm event processing rule.
6. A video monitoring apparatus, comprising:
a dividing unit, configured to, if the display screen is in a multi-display-area division mode, divide the display area of the display screen into a global display area and at least one local display area according to the multi-display-area division mode, wherein the dividing unit is specifically configured to: if the display screen is in the multi-display-area division mode, determine a display-area division count according to a number setting instruction for the local display areas, wherein the division count is the number of local display areas corresponding to the number setting instruction plus one; equally divide the display area of the display screen into a plurality of display sub-areas, the number of which equals the division count; determine one of the display sub-areas as the global display area; and determine the remaining display sub-areas as the local display areas;
a code stream receiving unit, configured to send code stream information to a camera, where the code stream information includes main code stream information and auxiliary code stream information, the main code stream information includes information of a code stream corresponding to the global display area, and the auxiliary code stream information includes information of a code stream corresponding to the local display area, and receive a main code stream and an auxiliary code stream sent by the camera according to the code stream information;
a global video picture display unit, configured to display a global video picture monitored by a camera on the global display area according to the main code stream;
a local video picture display unit, configured to display a local video picture monitored by the camera on the local display area according to the auxiliary code stream and the position of a rectangular area arranged in the global display area;
an alarm event occurrence information generating unit, configured to: judge whether an alarm triggering event occurs according to video frames corresponding to the auxiliary code stream and a preset video frame analysis rule, wherein the preset video frame analysis rule specifically comprises: acquiring feature difference data between adjacent frames from the consecutive video frames corresponding to the auxiliary code stream; calculating an alarm trigger event occurrence probability from the feature difference data, the probability representing the likelihood that the video content corresponding to the frames constitutes an alarm triggering event; and if the occurrence probability is equal to or greater than a preset alarm trigger probability threshold, judging that an alarm triggering event has occurred; generate alarm event occurrence information if an alarm triggering event occurs; and display the alarm event occurrence information on the local display area.
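The dividing unit's equal-division step can be sketched as follows; the single-row layout is a hypothetical choice, since the claims fix only the count (the requested number of local areas plus one) and the equal split:

```python
def divide_display(screen_w, screen_h, local_count):
    """Divide the screen into local_count + 1 equal sub-areas:
    one global display area plus the requested local display areas."""
    count = local_count + 1                 # division count per the claim
    sub_w = screen_w / count
    # Hypothetical layout: one row of equal-width sub-areas, as (x0, y0, x1, y1)
    subs = [(i * sub_w, 0, (i + 1) * sub_w, screen_h) for i in range(count)]
    return {"global": subs[0], "locals": subs[1:]}
```

For a 1920×1080 screen with three local areas requested, this yields four 480-px-wide columns: one global and three local.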
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201910069108.9A 2019-01-24 2019-01-24 Video monitoring method and device and terminal equipment Active CN109873980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910069108.9A CN109873980B (en) 2019-01-24 2019-01-24 Video monitoring method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN109873980A CN109873980A (en) 2019-06-11
CN109873980B true CN109873980B (en) 2020-08-21

Family

ID=66918120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910069108.9A Active CN109873980B (en) 2019-01-24 2019-01-24 Video monitoring method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN109873980B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866578B (en) * 2021-02-03 2023-04-07 四川新视创伟超高清科技有限公司 Global-to-local bidirectional visualization and target tracking system and method based on 8K video picture
CN112866651A (en) * 2021-02-03 2021-05-28 四川新视创伟超高清科技有限公司 Super-channel super-machine-position video processing or monitoring system and method based on 8K camera
CN113055604A (en) * 2021-04-01 2021-06-29 四川新视创伟超高清科技有限公司 Optimal visual angle video processing system and method based on 8K video signal and AI technology

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3585625B2 (en) * 1996-02-27 2004-11-04 シャープ株式会社 Image input device and image transmission device using the same
CN102300083A (en) * 2011-09-27 2011-12-28 杭州华三通信技术有限公司 Method and equipment for magnifying local region image
CN103595954A (en) * 2012-08-16 2014-02-19 北京中电华远科技有限公司 Method and system for multi-video-image fusion processing based on position information
CN107438154A (en) * 2016-05-25 2017-12-05 中国民用航空总局第二研究所 A kind of high-low-position linkage monitoring method and system based on panoramic video



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant