CN111681179A - Screen-side display method and device, computer equipment and computer-readable storage medium - Google Patents

Screen-side display method and device, computer equipment and computer-readable storage medium Download PDF

Info

Publication number
CN111681179A
CN111681179A (Application No. CN202010442075.0A)
Authority
CN
China
Prior art keywords
thread
video data
screen
processing
beautifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010442075.0A
Other languages
Chinese (zh)
Inventor
赵超杰
王正学
董亮
康玄烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010442075.0A priority Critical patent/CN111681179A/en
Publication of CN111681179A publication Critical patent/CN111681179A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a screen-side display method and device, computer equipment, and a computer-readable storage medium. The screen-side display method comprises the following steps: collecting video data; creating a first thread and a second thread; controlling the first thread to acquire a corresponding first configuration resource, so as to run the first thread to perform face recognition processing on the video data and to output and display the face-recognized image on a Web side; and controlling the second thread to acquire a corresponding second configuration resource, so as to run the second thread to perform beautification processing on the video data and to output and display the beautified image on a screen side. The application thereby solves the problem that face recognition and the screen-side display effect cannot both be achieved.

Description

Screen-side display method and device, computer equipment and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a screen-side display method and apparatus, a computer device, and a computer-readable storage medium.
Background
Beauty cameras and beauty mobile phones on the market currently come in diverse styles, but camera-side devices only meet the basic requirements of monitoring and face recognition; the concept of beautification has not been introduced. Building video-intercom products typically employ a large display screen. Because the image effect on the Web side is inconsistent with the display effect on the screen side, the Web-side image effect is often emphasized to achieve better face recognition, while the screen-side display effect is gradually neglected. However, the screen-side display is what a customer sees first. Greatly improving the screen-side display effect without affecting the face recognition function is therefore an urgent problem to be solved.
In the related art, a single ISP supports both the encoded/decoded image effect on the Web side and screen-side display output with various different effects. Because the basic requirement is face recognition, the image style must be tuned entirely toward the face recognition algorithm library; the various screen-side display effects are completely neglected, and the visual effect presented to a customer varies with the screen used.
At present, no effective solution has been proposed for the problem in the related art that face recognition and the screen-side display effect cannot both be achieved.
Disclosure of Invention
The embodiments of the present application provide a screen-side display method and device, computer equipment, and a computer-readable storage medium, so as to at least solve the problem in the related art that face recognition and the screen-side display effect cannot both be achieved.
In a first aspect, an embodiment of the present application provides a screen-side display method applied to a monitoring system, where the monitoring system includes a Web side and a screen side; the Web side represents the background display interface of the monitoring system, and the screen side represents the front-end display interface of the monitoring system. The method comprises the following steps:
collecting video data;
creating a first thread and a second thread;
controlling the first thread to acquire a corresponding first configuration resource, so as to run the first thread to perform face recognition processing on the video data and to output and display the face-recognized image on the Web side;
and controlling the second thread to acquire a corresponding second configuration resource, so as to run the second thread to perform beautification processing on the video data and to output and display the beautified image on the screen side.
In some embodiments, the controlling the second thread to acquire the corresponding second configuration resource comprises:
and after the first thread calls the ISP chip to obtain the first configuration resource, controlling the second thread to call the ISP chip to obtain the second configuration resource.
In some embodiments, the collecting video data comprises:
controlling video acquisition equipment to acquire video data;
and transmitting the video data to a data buffer.
In some embodiments, the running the first thread to perform face recognition processing on the video data includes:
controlling the first thread to acquire the video data from the data buffer area and acquire an image processing algorithm from the video acquisition equipment;
processing the video data according to the image processing algorithm to obtain first processing data;
acquiring a face recognition algorithm from the first configuration resource;
and carrying out face recognition processing on the first processing data according to the face recognition algorithm.
In some embodiments, the running the second thread to beautify the video data includes:
controlling the second thread to obtain the video data from the data buffer;
and acquiring a beautifying algorithm from the second configuration resource, and performing beautifying processing on the video data according to the beautifying algorithm.
In some embodiments, the running the second thread to beautify the video data includes:
controlling the second thread to receive video data issued by the first thread;
and acquiring a beautifying algorithm from the second configuration resource, and performing beautifying processing on the video data according to the beautifying algorithm.
In some embodiments, the outputting and displaying the beautified image on the screen side comprises:
acquiring display parameters of a screen end;
and adjusting the parameters of the beautifying image according to the display parameters.
In a second aspect, an embodiment of the present application provides a screen-end display device, including:
the data acquisition module is used for acquiring video data;
the thread creating module is used for creating a first thread and a second thread;
the first processing module is used for controlling the first thread to acquire a corresponding first configuration resource, so as to run the first thread to perform face recognition processing on the video data and to output and display the face-recognized image on the Web side;
and the second processing module is used for controlling the second thread to acquire a corresponding second configuration resource, so as to run the second thread to perform beautification processing on the video data and to output and display the beautified image on the screen side.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the screen-end display method according to the first aspect is implemented.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the screen-side display method according to the first aspect.
Compared with the related art, the screen-side display method and device, computer equipment, and computer-readable storage medium provided in the embodiments of the present application collect video data; create a first thread and a second thread; control the first thread to acquire a corresponding first configuration resource, so as to run the first thread to perform face recognition processing on the video data and to output and display the face-recognized image on the Web side; and control the second thread to acquire a corresponding second configuration resource, so as to run the second thread to perform beautification processing on the video data and to output and display the beautified image on the screen side, thereby solving the problem that face recognition and the screen-side display effect cannot both be achieved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a screen-side display method according to an embodiment of the present application;
fig. 2 is a flowchart of a face recognition process performed on video data in an embodiment of the present application;
fig. 3 is a first flowchart of a process of performing a beautifying process on video data according to an embodiment of the present application;
fig. 4 is a flowchart of a second process of performing a beautifying process on video data according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating outputting and displaying a beauty image on a screen in an embodiment of the present application;
FIG. 6 is a flowchart of a screen-side display method according to an embodiment of the present application;
FIG. 7 is a diagram illustrating image effects displayed on a screen side and a Web side in an embodiment of the present application;
fig. 8 is a block diagram of a display device at a screen end according to an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of a screen-side display device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The words "a," "an," "the," and similar terms in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association between objects and covers three cases; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
The screen end display method, the screen end display device, the computer equipment and the computer readable storage medium can be applied to a building monitoring system, but are not limited to the application.
The embodiment provides a screen end display method, which is applied to a monitoring system, wherein the monitoring system comprises a Web end and a screen end, the Web end represents a background display interface of the monitoring system, and the screen end represents a front-end display interface of the monitoring system; fig. 1 is a flowchart of a screen end display method according to an embodiment of the present application, where the flowchart includes the following steps:
and step S110, collecting video data.
It is understood that the video data acquisition device may be used to acquire video data and store the acquired video data. Video data includes, but is not limited to, captured facial images and video.
Step S120, a first thread and a second thread are created.
Specifically, the number of sub-threads may be determined by running the 2A program (the automatic exposure and automatic white balance control program), after which a sub-thread creation function creates the sub-threads according to that number. For example, when the number of sub-threads is two, the sub-thread creation function is run to create the first thread and the second thread. The first thread and the second thread are controlled to execute different tasks, which improves the running efficiency of the process.
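As an illustrative sketch only (not part of the patent disclosure), the two sub-threads executing different tasks can be modeled with standard threading primitives; the worker functions and frame names below are hypothetical placeholders:

```python
import threading

def face_recognition_worker(frames, results):
    # hypothetical placeholder for the first thread's face-recognition task
    results["web"] = [f"recognized({f})" for f in frames]

def beautify_worker(frames, results):
    # hypothetical placeholder for the second thread's beautification task
    results["screen"] = [f"beautified({f})" for f in frames]

frames = ["frame0", "frame1"]
results = {}
t1 = threading.Thread(target=face_recognition_worker, args=(frames, results))
t2 = threading.Thread(target=beautify_worker, args=(frames, results))
t1.start(); t2.start()   # the two threads run their different tasks concurrently
t1.join(); t2.join()
```

Each thread writes to a distinct key, so no lock is needed in this minimal example.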
Step S130, controlling the first thread to acquire a corresponding first configuration resource, so as to operate the first thread to perform face recognition processing on the video data, and output and display an image after the face recognition processing on the Web end.
It should be noted that the first configuration resource includes, but is not limited to, running memory. The first thread is controlled to obtain its corresponding first configuration resource, i.e., to obtain running memory to run the first thread; the video data then undergoes face recognition processing, and the face-recognized image is encoded, decoded, and output for display on the Web side.
Step S140, controlling the second thread to obtain the corresponding second configuration resource, so as to operate the second thread to perform the beauty processing on the video data, and output and display the beauty image on the screen.
The second configuration resource includes, but is not limited to, running memory. The second thread is controlled to obtain its corresponding second configuration resource, i.e., to obtain running memory to run the second thread; the video data then undergoes beautification processing, and the beautified image is output and displayed directly on the screen side, without encoding or decoding.
In an actual application scenario, the screen side can represent the human-machine interaction display interface in a building access-control monitoring system, and the Web side can represent the terminal display interface in that system. For example, in a residential-community access monitoring system, when someone wants to enter the community, the administrator of the monitoring system can see the person's face image on the screen side and see the face-recognized image on the terminal display interface.
Through steps S110 to S140, a first thread and a second thread are created: the first thread is controlled to perform face recognition processing on the video data and to output and display the face-recognized image on the Web side, and the second thread is controlled to perform beautification processing on the video data and to output and display the beautified image on the screen side. By controlling the first and second threads to perform face recognition and beautification respectively, the video data is split into two data streams that undergo separate image processing. The screen-side display effect is thereby improved without affecting the face recognition function, solving the problem that face recognition and the screen-side display effect cannot both be achieved, and improving user experience and, in turn, customer satisfaction.
In some embodiments, acquiring the second configuration resource includes step S1401:
step S1401, after the first thread calls the ISP chip to obtain the first configuration resource, the second thread is controlled to call the ISP chip to obtain the second configuration resource.
It can be understood that, when there is only one ISP chip, time-division multiplexing is adopted: the first thread is controlled to call the ISP chip to obtain the corresponding first configuration resource, and after the first configuration resource is obtained, the second thread is controlled to call the ISP chip to obtain the corresponding second configuration resource.
Specifically, the first configuration resource is obtained by controlling the call signal sent by the first thread to access the ISP chip; after it is obtained, the second configuration resource is obtained by controlling the call signal sent by the second thread to access the ISP chip. At any moment, only one call signal may access the ISP chip; otherwise, the ISP chip cannot distinguish between multiple call signals and thus cannot allocate the configuration resources.
Through step S1401, time-division multiplexing is adopted: after the first thread calls the ISP chip to obtain the first configuration resource, the second thread is controlled to call the ISP chip to obtain the second configuration resource. Face recognition and screen-side beautification can thus both be implemented with a single ISP chip, saving cost without affecting the functions of the monitoring system.
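The time-division multiplexing described above can be sketched with a lock and an event; the `ISPChip` class and resource names here are hypothetical stand-ins, not the patent's actual chip interface:

```python
import threading

class ISPChip:
    """Hypothetical stand-in for the single ISP chip's configuration interface."""
    def __init__(self):
        self._lock = threading.Lock()   # only one call signal may access the chip
        self.grant_order = []

    def acquire_config(self, caller, resource):
        # the lock serializes access, mimicking time-division multiplexing
        with self._lock:
            self.grant_order.append(caller)
            return {"owner": caller, "resource": resource}

isp = ISPChip()
first_done = threading.Event()
configs = {}

def first_thread():
    configs["first"] = isp.acquire_config("thread-1", "face-recognition resource")
    first_done.set()                    # signal that the first resource is obtained

def second_thread():
    first_done.wait()                   # call the ISP only after the first thread is done
    configs["second"] = isp.acquire_config("thread-2", "beautification resource")

t2 = threading.Thread(target=second_thread); t2.start()
t1 = threading.Thread(target=first_thread); t1.start()
t1.join(); t2.join()
```

The event enforces the ordering stated in step S1401: the second thread's call is issued only after the first configuration resource has been obtained.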
In some embodiments, step S110 includes steps S111 to S112, wherein:
and step S111, controlling the video acquisition equipment to acquire video data.
In step S112, the video data is transmitted to the data buffer.
It should be noted that the video data collected by the video capture device is transmitted to the data buffer, so that the video data can conveniently be obtained from the buffer for face recognition and beautification processing.
Through steps S111 to S112, the video data collected by the video capture device is transmitted to the data buffer, from which it is then obtained for face recognition and beautification processing; this saves data transmission time and improves data transmission efficiency.
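A minimal sketch of such a data buffer, assuming a bounded thread-safe queue between the capture device and the processing threads (the capacity and frame names are illustrative):

```python
import queue

frame_buffer = queue.Queue(maxsize=8)   # data buffer between capture and processing

def capture_frames(frames):
    # the video capture device pushes each collected frame into the buffer
    for f in frames:
        frame_buffer.put(f)

capture_frames(["frame0", "frame1", "frame2"])
consumed = [frame_buffer.get() for _ in range(frame_buffer.qsize())]
```

A bounded queue also provides back-pressure: `put` blocks when the buffer is full, preventing the capture device from outrunning the processing threads.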
In some embodiments, fig. 2 is a flowchart of performing face recognition processing on video data in the embodiment of the present application, where the flow includes steps S210 to S240:
and step S210, controlling the first thread to acquire video data from the data buffer area and acquiring an image processing algorithm from the video acquisition equipment.
It can be understood that the first thread is controlled to interact with the algorithm layer of the video capture device, so as to obtain the image processing algorithms carried by the device. These include, but are not limited to, an automatic exposure (AE) algorithm and an automatic white balance (AWB) algorithm.
Step S220, processing the video data according to an image processing algorithm to obtain first processed data.
It will be appreciated that the video data is processed according to the image processing algorithms to achieve a particular display effect. For example, automatic exposure processing via the AE algorithm improves picture brightness and image quality, and tone processing of images shot under different lighting via the AWB algorithm likewise improves image quality.
Step S230, a face recognition algorithm is obtained from the first configuration resource.
Step S240, performing face recognition processing on the first processed data according to a face recognition algorithm.
Through steps S210 to S240, the first thread is controlled to obtain the image processing algorithms from the video capture device and process the video data, improving picture brightness and image quality; face recognition processing is then performed on the resulting first processed data, which improves the accuracy of face recognition.
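The steps above can be sketched as a small pipeline; the AE, AWB, and recognition functions below are simplified hypothetical stand-ins operating on a single-channel pixel list, not the algorithms from the actual configuration resource:

```python
def auto_exposure(pixels, target_mean=128):
    # illustrative AE stand-in: scale pixels so their mean approaches the target
    mean = sum(pixels) / len(pixels)
    gain = target_mean / mean if mean else 1.0
    return [min(255, round(p * gain)) for p in pixels]

def auto_white_balance(pixels):
    # illustrative AWB stand-in: a no-op for this single-channel example
    return list(pixels)

def recognize_faces(pixels):
    # hypothetical stand-in for the face recognition algorithm obtained
    # from the first configuration resource
    return {"face_detected": max(pixels) > 100}

first_processed = auto_white_balance(auto_exposure([40, 60, 80]))
recognition_result = recognize_faces(first_processed)
```

The ordering mirrors steps S210–S240: device-side image processing first, then recognition on the first processed data.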
In some embodiments, fig. 3 is a first flowchart of performing a beautifying process on video data in the embodiment of the present application, where the flowchart includes steps S310 to S320:
in step S310, the second thread is controlled to obtain video data from the data buffer.
Step S320, obtaining the beauty algorithm from the second configuration resource, and performing beauty processing on the video data according to the beauty algorithm.
It should be noted that the second thread does not interact with the algorithm layer of the video capture device; it obtains the video data directly from the data buffer and obtains the beautification algorithm from the second configuration resource, so as to beautify the video data.
Specifically, edge enhancement and spatial/temporal noise reduction can be applied to the video data to achieve a skin-smoothing, blemish-removal effect. Brightness adjustment via the brightness adjustment module brightens the skin tone. Face adjustment via the color filter array makes the face appear tender and saturated. By obtaining the coordinates of facial feature points, those coordinates can be adjusted and the background repaired, achieving a face-slimming effect.
Through steps S310 to S320, the second thread is controlled to obtain video data from the data buffer, obtain the beautification algorithm from the second configuration resource, and beautify the video data accordingly, achieving the screen-side beautification effect. Because the second thread does not communicate with the algorithm layer of the video capture device, it obtains the video data directly from the data buffer and applies only beautification, without automatic exposure or automatic white balance; this solves the problem that the style of the image displayed on the screen side is biased toward the face recognition algorithm.
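A minimal sketch of the beautification path, assuming single-channel pixel lists; the brighten-then-blur steps are simplified illustrations of the brightness-adjustment and skin-smoothing processing mentioned above, not the patent's algorithm:

```python
def beautify(pixels, brighten=20):
    # illustrative beautification sketch: brighten skin tone, then apply a
    # small box blur as a stand-in for skin-smoothing noise reduction
    brightened = [min(255, p + brighten) for p in pixels]
    smoothed = []
    for i in range(len(brightened)):
        window = brightened[max(0, i - 1):i + 2]
        smoothed.append(round(sum(window) / len(window)))
    return smoothed

screen_frame = beautify([100, 110, 120])
```

Note that, mirroring step S310, this path takes raw frames directly; no AE/AWB pass precedes the beautification.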
In some embodiments, fig. 4 is a second flowchart of performing a beautifying process on video data in the embodiment of the present application, where the flowchart includes steps S410 to S420:
step S410, controlling the second thread to receive the video data issued by the first thread.
It can be understood that the first thread may be controlled to forward the acquired video data to the second thread, and the second thread may be controlled to receive the video data issued by the first thread.
Step S420, obtaining a beauty algorithm from the second configuration resource, and performing beauty processing on the video data according to the beauty algorithm.
Through steps S410 to S420, the second thread is controlled to receive the video data issued by the first thread, saving data transmission time and improving data transmission efficiency. Meanwhile, since the second thread does not communicate with the algorithm layer of the video capture device and applies only beautification to the video data, the problem that the screen-side image style is biased toward the face recognition algorithm is solved, improving the screen-side display effect while also improving data transmission efficiency.
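The thread-to-thread handoff of steps S410–S420 can be sketched as a producer–consumer pair over a queue; the `ae_awb`/`beautified` labels are hypothetical markers for the per-thread processing:

```python
import queue
import threading

handoff = queue.Queue()

def first_thread(frames):
    for f in frames:
        processed = f"ae_awb({f})"     # the first thread's image processing
        handoff.put(processed)         # issue the frame to the second thread
    handoff.put(None)                  # sentinel: no more frames

def second_thread(out):
    while True:
        f = handoff.get()
        if f is None:
            break
        out.append(f"beautified({f})")  # the second thread beautifies each frame

out = []
t2 = threading.Thread(target=second_thread, args=(out,))
t2.start()
first_thread(["frame0", "frame1"])
t2.join()
```

The sentinel value lets the consumer terminate cleanly once the producer finishes.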
In some embodiments, fig. 5 is a flowchart illustrating outputting and displaying a beauty image on a screen in the embodiment of the present application, where the flowchart includes steps S510 to S520:
step S510, acquiring a display parameter of the screen.
Step S520, adjusting parameters of the beauty image according to the display parameters.
It should be noted that the display parameters include, but are not limited to, pixel pitch, resolution, refresh rate, and power consumption; the parameters of the beautified image include, but are not limited to, image size and resolution. The parameters of the beautified image are adjusted according to the display parameters so as to adapt to screens of different sizes and types.
Through steps S510 to S520, the display parameters of the screen side are obtained and the parameters of the beautified image are adjusted accordingly to adapt to screens of different sizes and types, solving the problem of inconsistent display output across different screens.
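One plausible form of this adjustment is an aspect-ratio-preserving resize of the beautified image to the target screen resolution; this sketch is an assumption for illustration and ignores pixel pitch, refresh rate, and power consumption:

```python
def fit_to_screen(image_size, screen_resolution):
    # scale the beautified image to the screen while preserving aspect ratio;
    # other display parameters are omitted from this sketch
    iw, ih = image_size
    sw, sh = screen_resolution
    scale = min(sw / iw, sh / ih)
    return (round(iw * scale), round(ih * scale))

adjusted = fit_to_screen((1920, 1080), (1280, 720))
```

Taking the minimum of the two axis scales guarantees the image fits inside the screen on both axes without distortion.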
The embodiments of the present application are described and illustrated below by way of specific examples.
Fig. 6 is a flowchart of a screen end display method in an embodiment of the present application, where the flowchart includes the following steps:
step S601, controlling the video acquisition equipment to acquire video data and transmitting the video data to the data buffer area.
Step S602, the 2A program is run to create two sub-threads, and the sub-thread creating function is run to create the first thread and the second thread.
Step S603, according to the time-division multiplexing method, controlling the first thread to call the ISP chip to obtain a corresponding first configuration resource to run the first thread, controlling the first thread to obtain video data from the data buffer, and obtaining an image processing algorithm from the video capture device. The image processing algorithms include an automatic exposure AE algorithm and an automatic white balance AWB algorithm. And processing the video data according to an image processing algorithm to obtain first processing data. And acquiring a face recognition algorithm from the first configuration resource, performing face recognition processing on the first processing data according to the face recognition algorithm, and outputting and displaying the image subjected to the face recognition processing on a Web end through coding and decoding.
Step S604, after the first configuration resource is acquired, the second thread is controlled to call the ISP chip to acquire a corresponding second configuration resource. The second thread is controlled to acquire the video data from the data buffer, a beauty algorithm is acquired from the second configuration resource, beautifying processing is performed on the video data according to the beauty algorithm, and the beauty image is output and displayed on the screen end.
Step S605, the display parameters of the screen end are acquired, and the parameters of the beauty image are adjusted according to the display parameters. The display parameters include pixel pitch and resolution; the parameters of the beauty image include image size and resolution.
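The time-division multiplexing of steps S603 and S604 — the second thread may only call the ISP chip for its configuration resource after the first thread has obtained its own — can be sketched with an event and a lock. This is a minimal toy model: the `ISPChip` class and all names are hypothetical stand-ins, not the patent's actual ISP interface.

```python
import threading

class ISPChip:
    """Toy stand-in for the ISP chip: it grants configuration resources
    to one thread at a time (time-division multiplexing)."""
    def __init__(self):
        self._lock = threading.Lock()
        self.grant_order = []          # records which thread configured first

    def get_config(self, name):
        with self._lock:               # only one thread configures at a time
            self.grant_order.append(name)
            return {"owner": name}

def run_pipeline(isp, results):
    """Run first/second threads so the second configures only after the first."""
    ready = threading.Event()          # first thread signals once configured

    def first_thread():
        results["first_cfg"] = isp.get_config("first")
        ready.set()                    # now the second thread may configure

    def second_thread():
        ready.wait()                   # block until the first resource is granted
        results["second_cfg"] = isp.get_config("second")

    t1 = threading.Thread(target=first_thread)
    t2 = threading.Thread(target=second_thread)
    t2.start()
    t1.start()
    t1.join()
    t2.join()
```

Even though the second thread is started first, the `Event` guarantees the ISP grants the first configuration resource before the second, matching the ordering in step S604.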
Fig. 7 is a schematic diagram of the image display effects at the screen end and the Web end in the embodiment of the present application. As shown in fig. 7, comparing the image displayed at the screen end before beautifying with the image displayed after beautifying, it can be seen that after beautifying, pores and fine lines of the face are invisible and the skin is smooth; except for dark spots that are large and deeply colored, skin defects such as dark spots and pockmarks are well removed, while moles are retained. After beautifying, the brightness of the face is improved, so the face looks whiter; the color richness is increased and the bright areas of the face are highlighted, so the face looks glossy. In hue, the skin is slightly pink, so both white and yellow skin look relatively ruddy with a good complexion. The lips are slightly pinker after beautifying than before; for faces with darker lip color, the lip color becomes lighter after beautifying.
It should be understood that, although the steps in the flowcharts of figs. 1 to 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, they may be performed in other orders. Moreover, at least some of the steps in figs. 1 to 6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of performing these sub-steps or stages is not necessarily sequential: they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps. For example, referring to fig. 1, the execution order of step S110 and step S120 may be interchanged: step S110 may be performed first and then step S120, or step S120 first and then step S110. For another example, in conjunction with fig. 6, the order of step S601 and step S602 may also be interchanged.
The present embodiment further provides a screen end display device, which is used to implement the foregoing embodiments and preferred embodiments; what has already been described is not repeated here. As used below, the terms "module," "unit," "subunit," and the like may refer to a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 8 is a block diagram of a structure of a screen-side display device according to an embodiment of the present application, and as shown in fig. 8, the screen-side display device includes:
a data acquisition module 810 for acquiring video data;
a thread creation module 820 for creating a first thread and a second thread;
the first processing module 830 is configured to control the first thread to acquire a corresponding first configuration resource, so as to operate the first thread to perform face recognition processing on the video data, and output and display an image after the face recognition processing on a Web end;
the second processing module 840 is configured to control the second thread to obtain a corresponding second configuration resource, so as to operate the second thread to perform a beautifying process on the video data, and output and display a beautifying image on the screen.
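The thread creation module 820 above can be sketched as follows. The function and thread names are illustrative assumptions; the patent only specifies that a first thread and a second thread are created, not how they are named or scheduled.

```python
import threading

def create_processing_threads(first_task, second_task):
    """Create (but do not start) the two sub-threads of the thread
    creation module: one for face recognition output to the Web end,
    one for beautifying output to the screen end. Names are illustrative."""
    first_thread = threading.Thread(target=first_task, name="face-recognition")
    second_thread = threading.Thread(target=second_task, name="beautify")
    return first_thread, second_thread
```

The caller would then start both threads after the configuration resources have been arranged, as described for the first and second processing modules.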
In some embodiments, the second processing module 840 includes a calling unit 841, configured to control the second thread to call the ISP chip to obtain the second configuration resource after the first thread calls the ISP chip to obtain the first configuration resource.
In some of these embodiments, the data acquisition module 810 includes a data acquisition unit 811 and a data buffer unit 812, wherein:
the data acquisition unit 811 is used for controlling the video acquisition equipment to acquire video data;
a data buffer unit 812 for transmitting the video data to a data buffer.
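The data buffer that the acquisition unit writes to and both threads read from behaves like a bounded FIFO. A minimal sketch using Python's standard `queue` module is shown below; the helper names are assumptions, and a real implementation would hold video frames rather than plain values.

```python
import queue

def make_data_buffer(maxsize=8):
    """A bounded FIFO standing in for the data buffer area; `maxsize`
    is an illustrative capacity, not specified by the patent."""
    return queue.Queue(maxsize=maxsize)

def capture(device_frames, buf):
    """Data acquisition unit: push each captured frame into the buffer.
    `put` blocks when the buffer is full, applying back-pressure."""
    for frame in device_frames:
        buf.put(frame)
```

Consumers (the first and second threads) would then call `buf.get()` to retrieve frames in capture order.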
In some embodiments, the first processing module 830 includes a first obtaining unit 831, a first processing unit 832, a second obtaining unit 833, and a second processing unit 834, wherein:
a first obtaining unit 831, configured to control the first thread to obtain the video data from the data buffer, and obtain an image processing algorithm from the video capture device;
a first processing unit 832, configured to process the video data according to the image processing algorithm to obtain first processed data;
a second obtaining unit 833, configured to obtain a face recognition algorithm from the first configuration resource;
a second processing unit 834, configured to perform face recognition processing on the first processed data according to the face recognition algorithm.
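The first processing module's two stages — image processing (e.g. AE/AWB) to produce first processed data, then face recognition on that data — compose as a simple pipeline. The sketch below treats both algorithms as opaque callables; this is an illustrative structure only, not the patent's actual algorithms.

```python
def first_thread_process(frames, image_algo, face_algo):
    """Run the first thread's stages: apply the image processing
    algorithm to each frame to get first processed data, then apply
    the face recognition algorithm to that data. Illustrative sketch."""
    first_processed = [image_algo(f) for f in frames]
    return [face_algo(f) for f in first_processed]
```

For instance, with toy stand-in callables the staging can be verified: the image algorithm runs first on every frame, and face recognition sees only its output.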
In some of these embodiments, the second processing module 840 further includes an obtaining unit 842 and a processing unit 843, wherein:
an obtaining unit 842, configured to control the second thread to obtain the video data from the data buffer;
the processing unit 843 is configured to obtain a beauty algorithm from the second configuration resource, and perform beauty processing on the video data according to the beauty algorithm.
In some embodiments, the obtaining unit 842 is further configured to control the second thread to receive the video data sent by the first thread.
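In this variant the second thread does not read the buffer directly but receives video data sent by the first thread. A hand-off queue with a sentinel value is one way to sketch this; the sentinel convention and function names are assumptions for illustration.

```python
import queue

def first_thread_forward(frames, handoff):
    """First thread forwards each frame to the second thread, then a
    None sentinel to signal that no more frames will follow."""
    for f in frames:
        handoff.put(f)
    handoff.put(None)

def second_thread_receive(handoff):
    """Second thread consumes frames sent by the first thread until
    the sentinel arrives, then returns them for beautifying."""
    received = []
    while (f := handoff.get()) is not None:
        received.append(f)
    return received
```

This preserves frame order and lets the first thread control exactly which data the second thread beautifies.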
In some embodiments, the second processing module 840 further includes a display parameter obtaining unit 845 and an image parameter adjusting unit 846, wherein:
a display parameter acquiring unit 845, configured to acquire a display parameter of a screen end;
an image parameter adjusting unit 846, configured to adjust a parameter of the beauty image according to the display parameter.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented by hardware, the modules may be located in the same processor, or distributed among different processors in any combination.
In addition, the screen-side display method described in conjunction with fig. 1 in the embodiment of the present application may be implemented by a screen-side display device. Fig. 9 is a schematic hardware structure diagram of a screen-side display device according to an embodiment of the present application.
The screen-side display device may include a processor 91 and a memory 92 storing computer program instructions.
Specifically, the processor 91 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 92 may include, among other things, mass storage for data or instructions. By way of example and not limitation, the memory 92 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 92 may include removable or non-removable (or fixed) media, where appropriate. The memory 92 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 92 is a non-volatile memory. In particular embodiments, the memory 92 includes Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), an Electrically Alterable ROM (EAROM), flash memory, or a combination of two or more of these. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPM DRAM), an Extended Data Output DRAM (EDO DRAM), a Synchronous DRAM (SDRAM), and the like.
The memory 92 may be used to store or cache various data files to be processed and/or communicated, as well as computer program instructions executed by the processor 91.
The processor 91 realizes any one of the screen-side display methods in the above-described embodiments by reading and executing computer program instructions stored in the memory 92.
In some of these embodiments, the screen-side display device may also include a communication interface 93 and a bus 90. As shown in fig. 9, the processor 91, the memory 92, and the communication interface 93 are connected to each other via the bus 90 to complete communication therebetween.
The communication interface 93 is used to implement communication between the modules, apparatuses, units, and/or devices in the embodiments of the present application. The communication interface 93 may also carry out data communication with other components, such as external devices, image/data acquisition equipment, databases, external storage, and image/data processing workstations.
The bus 90 includes hardware, software, or both, coupling the components of the screen-side display device to one another. The bus 90 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example and not limitation, the bus 90 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. The bus 90 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
Based on the acquired video data, the screen-side display device may execute the screen-side display method described with reference to fig. 1 in the embodiments of the present application.
In addition, in combination with the screen end display method in the foregoing embodiments, an embodiment of the present application provides a computer-readable storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the screen-side display methods in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of the present specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A screen end display method is characterized by being applied to a monitoring system, wherein the monitoring system comprises a Web end and a screen end, the Web end represents a background display interface of the monitoring system, and the screen end represents a front end display interface of the monitoring system; the method comprises the following steps:
collecting video data;
creating a first thread and a second thread;
controlling the first thread to acquire a corresponding first configuration resource so as to operate the first thread to perform face recognition processing on the video data and output and display an image subjected to the face recognition processing on a Web end;
and controlling the second thread to acquire a corresponding second configuration resource so as to operate the second thread to perform beautifying processing on the video data and output and display a beautifying image on a screen end.
2. The screen-side display method of claim 1, wherein the controlling the second thread to obtain the corresponding second configuration resource comprises:
and after the first thread calls the ISP chip to obtain the first configuration resource, controlling the second thread to call the ISP chip to obtain the second configuration resource.
3. The on-screen display method of claim 1, wherein the capturing video data comprises:
controlling video acquisition equipment to acquire video data;
and transmitting the video data to a data buffer.
4. The screen-side display method according to claim 3, wherein the running the first thread to perform face recognition processing on the video data comprises:
controlling the first thread to acquire the video data from the data buffer area and acquire an image processing algorithm from the video acquisition equipment;
processing the video data according to the image processing algorithm to obtain first processing data;
acquiring a face recognition algorithm from the first configuration resource;
and carrying out face recognition processing on the first processing data according to the face recognition algorithm.
5. The on-screen display method of claim 3, wherein the running the second thread to perform a beautifying process on the video data comprises:
controlling the second thread to obtain the video data from the data buffer;
and acquiring a beautifying algorithm from the second configuration resource, and performing beautifying processing on the video data according to the beautifying algorithm.
6. The on-screen display method of claim 3, wherein the running the second thread to perform a beautifying process on the video data comprises:
controlling the second thread to receive video data issued by the first thread;
and acquiring a beautifying algorithm from the second configuration resource, and performing beautifying processing on the video data according to the beautifying algorithm.
7. The screen-end display method according to claim 1, wherein the outputting and displaying the beauty image on the screen end comprises:
acquiring display parameters of a screen end;
and adjusting the parameters of the beautifying image according to the display parameters.
8. A screen-side display device, comprising:
the data acquisition module is used for acquiring video data;
the thread creating module is used for creating a first thread and a second thread;
the first processing module is used for controlling the first thread to acquire a corresponding first configuration resource so as to operate the first thread to perform face recognition processing on the video data and output and display an image subjected to the face recognition processing on a Web end;
and the second processing module is used for controlling the second thread to acquire a corresponding second configuration resource so as to operate the second thread to perform the beautifying processing on the video data and output and display a beautifying image on a screen end.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the screen-side display method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the screen-side display method according to any one of claims 1 to 7.
CN202010442075.0A 2020-05-22 2020-05-22 Screen-side display method and device, computer equipment and computer-readable storage medium Pending CN111681179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442075.0A CN111681179A (en) 2020-05-22 2020-05-22 Screen-side display method and device, computer equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010442075.0A CN111681179A (en) 2020-05-22 2020-05-22 Screen-side display method and device, computer equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111681179A true CN111681179A (en) 2020-09-18

Family

ID=72434302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442075.0A Pending CN111681179A (en) 2020-05-22 2020-05-22 Screen-side display method and device, computer equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111681179A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973487A (en) * 2022-05-13 2022-08-30 杭州魔点科技有限公司 Face detection method, system, device and medium based on dynamic buffing

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413270A (en) * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Method and device for image processing and terminal device
CN105979243A (en) * 2015-12-01 2016-09-28 乐视致新电子科技(天津)有限公司 Processing method and device for displaying stereo images
CN106358003A (en) * 2016-08-31 2017-01-25 华中科技大学 Video analysis and accelerating method based on thread level flow line
CN106469291A (en) * 2015-08-19 2017-03-01 中兴通讯股份有限公司 Image processing method and terminal
CN109214303A (en) * 2018-08-14 2019-01-15 北京工商大学 A kind of multithreading dynamic human face based on cloud API is registered method
CN109769099A (en) * 2019-01-15 2019-05-17 三星电子(中国)研发中心 The detection method and device for personage's exception of conversing
CN109784157A (en) * 2018-12-11 2019-05-21 口碑(上海)信息技术有限公司 A kind of image processing method, apparatus and system
CN110020587A (en) * 2019-01-18 2019-07-16 阿里巴巴集团控股有限公司 Method, system, device and the equipment that identifying system intelligently raises speed
CN110191314A (en) * 2019-05-07 2019-08-30 百度在线网络技术(北京)有限公司 Camera data processing method, device and mobile unit based on android system
CN110278373A (en) * 2019-06-26 2019-09-24 Oppo广东移动通信有限公司 Image processor, image processing method, filming apparatus and electronic equipment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973487A (en) * 2022-05-13 2022-08-30 杭州魔点科技有限公司 Face detection method, system, device and medium based on dynamic buffing
CN114973487B (en) * 2022-05-13 2024-04-30 杭州魔点科技有限公司 Face detection method, system, device and medium based on dynamic skin grinding

Similar Documents

Publication Publication Date Title
KR102149187B1 (en) Electronic device and control method of the same
US9565410B2 (en) Automatic white balance with facial color features as reference color surfaces
CN109639982A (en) A kind of image denoising method, device, storage medium and terminal
WO2018103244A1 (en) Live streaming video processing method, device, and electronic apparatus
CN108108415B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107862653B (en) Image display method, image display device, storage medium and electronic equipment
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN108566516A (en) Image processing method, device, storage medium and mobile terminal
CN113329252B (en) Live broadcast-based face processing method, device, equipment and storage medium
CN107040726B (en) Double-camera synchronous exposure method and system
CN109919866B (en) Image processing method, device, medium and electronic equipment
CN107909686B (en) Method and device for unlocking human face, computer readable storage medium and electronic equipment
KR102207633B1 (en) Image photographing apparatus and control methods thereof
CN107360366B (en) Photographing method and device, storage medium and electronic equipment
CN112785488A (en) Image processing method and device, storage medium and terminal
CN107705279B (en) Image data real-time processing method and device for realizing double exposure and computing equipment
CN109688465A (en) Video source modeling control method, device and electronic equipment
CN105704395A (en) Shooting method and shooting device
CN111681179A (en) Screen-side display method and device, computer equipment and computer-readable storage medium
CN106357978B (en) Image output method, device and terminal
CN106375316B (en) A kind of method of video image processing and equipment
US20190205689A1 (en) Method and device for processing image, electronic device and medium
CN106127166A (en) A kind of augmented reality AR image processing method, device and intelligent terminal
CN112822413B (en) Shooting preview method, shooting preview device, terminal and computer readable storage medium
US20140307116A1 (en) Method and system for managing video recording and/or picture taking in a restricted environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination