CN104754332A - Smart wearing device video image transmitting method - Google Patents


Info

Publication number
CN104754332A
CN104754332A (application CN201510128785.5A)
Authority
CN
China
Prior art keywords
frame
huffman
data
transmission method
worn device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510128785.5A
Other languages
Chinese (zh)
Inventor
冯兆伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen First Blue-Chip Science And Technology Ltd
Original Assignee
Shenzhen First Blue-Chip Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen First Blue-Chip Science And Technology Ltd filed Critical Shenzhen First Blue-Chip Science And Technology Ltd
Priority to CN201510128785.5A priority Critical patent/CN104754332A/en
Publication of CN104754332A publication Critical patent/CN104754332A/en
Pending legal-status Critical Current

Abstract

The invention relates to the field of smart wearable devices, and in particular to a video and picture transmission method for a smart wearable device. The method first quantizes the 8-bit data of a video- or picture-format file into 4-bit data and encodes the quantized data with semi-adaptive Huffman coding to obtain quantized I-frame and P-frame data; the I-frame and P-frame segments are then decoded, quantized and framed, and the P frames are restored, quantized and finally superimposed onto the I frame. Before Huffman encoding or decoding, the quantized data are analyzed statistically to derive several Huffman trees, and these Huffman trees are built into the Huffman encoder and the Huffman decoder in advance. The method improves the dynamic video and picture display effect of the LED display of a smart wearable device, raises the device's memory utilization, reduces its memory overhead, lowers the bandwidth required for data transmission, and thereby improves the quality and grade of the product.

Description

Video and picture transmission method for a smart wearable device
Technical field
The present invention relates to the field of smart wearable devices, and in particular to a video and picture transmission method for a smart wearable device.
Background art
Bluetooth-based smart wearable devices are currently widespread and can be divided by architecture grade into high-end and low-end peripherals. High-end peripherals use hardware comparable to that of a smartphone and can independently run a complete operating system, giving a good user experience and display effect, but because of cost and battery life they are harder to popularize than low-end peripherals.
Low-end peripherals are generally built on 8- to 32-bit single-chip microcomputers, some with integrated Bluetooth. Their computing speed is low (below 100 MHz) and their working memory is very small (below 512 bytes), so the display is limited to pushing simple static pictures and showing text; they cannot achieve the dynamic effect of video playback (above 20 frames per second) that high-end devices offer.
Such devices usually carry an LED dot-matrix screen with gray levels, or an OLED screen. Their display method differs greatly from the LCD-based approach of high-end devices; in particular, LEDs exhibit current saturation, so the usual linear 256-level grayscale appears extremely non-uniform.
Summary of the invention
In order to overcome the above technical problems, the object of the present invention is to provide a video and picture transmission method for a smart wearable device that improves the dynamic video and picture display effect of the LED display of the smart wearable device, raises the device's memory utilization, reduces the overhead of its memory and stack space, and lowers the bandwidth required during data transmission, thereby improving the quality and grade of the product.
To achieve these goals, the present invention adopts the following technical scheme:
A video and picture transmission method for a smart wearable device, comprising the following steps:
S1: perform YUV decoding on a video- or picture-format file, the Y component outputting the original 8-bit data;
S2: quantize the 8-bit data;
S3: encode the quantized data from step S2 with semi-adaptive Huffman coding to obtain quantized I-frame and P-frame data;
S4: perform Huffman decoding on the data packets from step S3 to obtain grayscale data;
S5: perform I-frame Huffman decoding on the grayscale data of step S4 to obtain the segments of the I frame and the P frames;
S6: quantize and frame each I-frame segment of step S5;
S7: restore the P frames of step S5 by Huffman decoding and quantize them;
S8: superimpose the quantized P frames of step S7 onto the I frame;
S9: wait for the next I frame or P frame.
In particular, before Huffman decoding or encoding, the quantized data of step S2 are analyzed statistically to derive several Huffman trees, and these Huffman trees are built into the Huffman encoder and the Huffman decoder in advance.
Further, before Huffman decoding or encoding, the quantized data of step S2 are analyzed statistically to derive 20 Huffman trees.
Further, in step S2, the 8-bit data are quantized into 4-bit data.
Further, the video and picture transmission method uses a gamma adjuster to fine-tune the brightness of the LED display.
Further, the Huffman decoding of steps S4, S5 and S7 does not predict B frames; only I frames and P frames are computed.
Further, the I frame is a complete frame.
Further, a P frame records the change values relative to the previous I frame or the previously synthesized P frame.
Further, between steps S4 and S5, interlaced scanning of the I frame is used and the I frame is transferred to the Huffman decoder.
Beneficial effects of the present invention:
1. Because the Huffman trees are built into the Huffman encoder and decoder in advance, memory and stack usage and their cost are greatly reduced and system computing speed is improved.
2. The original 8-bit data of the video or picture are quantized into 4-bit data, which greatly reduces the memory required by the smart wearable device.
3. Because Huffman decoding does not predict B frames and only computes I frames and P frames, the information loading capacity and computing speed of the smart wearable device are improved without affecting the output video or picture.
4. Because a gamma adjuster is used, the LED display shows uniform brightness and the luminance saturation problem no longer occurs.
5. Because an MPEG-like method is used, video data can still be compressed even when the decoding processor is very weak. Under conservative conditions the compression ratio (compressed size / original size) of an I frame is about 80% and that of a P frame can reach 30%; with the actual proportion of I frames kept at about 30%, the overall stream is controlled at about 45% of the original. In the ideal case the overall stream can be reduced to about 25%.
6. Because interlaced scanning of the I frame is used and the quantization resolution of the P frame is halved, the size of each transmitted packet can be reduced to 50% of the original and the transmission memory usage at the decoder end is halved. This measure is very useful in the field of smart wearable devices, whose memory is only a few hundred bytes to a few kilobytes.
Embodiments
Existing low-end smart wearable devices generally use an 8-bit data transmission format. The required transmission bandwidth is high and occupies a large part of the device's already small memory; combined with weak hardware and a poor system configuration, such low-end smart wearable devices can only push simple text or static pictures. This is one of the problems the present invention overcomes, and also one of its highlights.
Existing high-end smart wearable devices generally use an 8- to 32-bit data transmission format and hardware comparable to that of a smartphone; they can independently run a complete operating system and offer a good user experience and display effect. The downside is higher cost and reduced battery life, and their LED displays often suffer from saturation current. Without affecting the functions that high-end smart wearable devices already provide, the present invention overcomes the above problems; this is also one of its highlights.
The present invention first solves the problems of the large bandwidth required for compressed video or picture transmission, the high memory cost, and the low memory utilization.
First, the video- or picture-format file is YUV-decoded and the Y component outputs the original 8-bit data; the 8-bit data are then quantized into 4-bit data, and the quantized data are encoded with semi-adaptive Huffman coding to obtain quantized I-frame and P-frame data. A key-frame approach similar to the I/B/P frames of the MPEG family is introduced, but because the decoding hardware is comparatively low-end the B frame is removed and only I and P frames remain: the I frame is a complete frame, and a P frame records the change values relative to the previous I frame or the previously synthesized P frame. This reduces the amount of packet data, the bandwidth required for transmission, and the memory cost. Since Huffman decoding does not predict B frames and only computes I frames and P frames, the information loading capacity and computing speed of the smart wearable device are improved without affecting the output video or picture. Because an MPEG-like method is used, video data can still be compressed even when the decoding processor is very weak: under conservative conditions the compression ratio (compressed size / original size) of an I frame is about 80% and that of a P frame can reach 30%; with the actual proportion of I frames kept at about 30%, the overall stream is controlled at about 45% of the original, and in the ideal case it can be reduced to about 25%.
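For illustration only (not part of the patent text), the following minimal Python sketch shows the key-frame idea described above: a P frame is formed as the change values against the previously synthesized frame and is superimposed again at the decoder. The array shapes, the 4-bit value range and the use of numpy are assumptions.

```python
import numpy as np

def make_p_frame(prev_frame, cur_frame):
    """P frame: change values relative to the previously synthesized frame."""
    return cur_frame.astype(np.int16) - prev_frame.astype(np.int16)

def apply_p_frame(prev_frame, p_frame):
    """Decoder side: superimpose the P frame onto the previous frame."""
    return np.clip(prev_frame.astype(np.int16) + p_frame, 0, 15).astype(np.uint8)

# 4-bit (0..15) gray frames on a small LED dot matrix; sizes are assumed
i_frame = np.random.randint(0, 16, (16, 32), dtype=np.uint8)
next_frame = np.clip(i_frame + np.random.randint(-1, 2, i_frame.shape), 0, 15).astype(np.uint8)

p = make_p_frame(i_frame, next_frame)
restored = apply_p_frame(i_frame, p)
assert np.array_equal(restored, next_frame)
```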
Next, the quantized I-frame and P-frame packets are Huffman-decoded to obtain grayscale data; the grayscale data are then subjected to I-frame Huffman decoding to obtain the segments of the I frame and the P frames. Here the DCT coding of the I frame is removed and adaptive Huffman coding is introduced. General video compression follows the chain DCT, then quantization, then Huffman coding, and decoding is the inverse process; because the decoding hardware is comparatively low-end, the DCT is removed and only quantization and Huffman coding remain. This reduces the amount of packet data, the bandwidth required for transmission, and the memory cost. Finally, each I-frame segment is quantized and framed, the P frames are restored by Huffman decoding, quantized and superimposed onto the I frame, and the decoder waits for the next I frame or P frame.
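The simplified pipeline without DCT, that is quantization followed by Huffman coding, can be sketched as follows; the toy code table and the example luma values are assumptions for illustration and are not taken from the patent.

```python
def quantize_8bit_to_4bit(y_bytes):
    """Quantize 8-bit luma samples to 4-bit values (0..15)."""
    return bytes(b >> 4 for b in y_bytes)

def huffman_encode(nibbles, code_table):
    """Encode 4-bit symbols with a prebuilt Huffman code table (symbol -> bit string)."""
    return "".join(code_table[n] for n in nibbles)

# Toy prefix-free table covering only the symbols used here; a real table has all 16
code_table = {0: "0", 15: "10", 8: "110", 7: "111"}
y = bytes([0, 0, 255, 128, 127, 0])
bits = huffman_encode(quantize_8bit_to_4bit(y), code_table)
print(bits)  # "00101101110"
```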
The I frame is the largest frame; after Huffman coding its size is about 50% to 95% of the original. In unfavorable cases the I frame is transmitted to the decoder almost intact, which raises the issue of buffer memory size: as long as such an I frame exists, the memory cannot be reduced. To reduce memory, the present invention uses interlaced scanning of the I frame: the I frame is transmitted in two (or three) passes, and a frame sequence number and segment count are attached to each pass so that every I-frame fragment can be identified. In this way each I frame can essentially be transmitted with less than 50% of the memory, at the cost of a delay of one extra frame, which can also be compensated by sending the two (or three) passes quickly. Because interlaced scanning of the I frame is used and the quantization resolution of the P frame is halved, the size of each transmitted packet can be reduced to 50% of the original and the transmission memory usage at the decoder end is halved. This measure is very useful in the field of smart wearable devices, whose memory is only a few hundred bytes to a few kilobytes.
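The interlaced transmission of the I frame might look like the following sketch; the fragment header fields and the row-wise interleaving are illustrative assumptions.

```python
def split_i_frame_interlaced(rows, passes=2, frame_seq=0):
    """Split an I frame (a list of rows) into interleaved fragments.
    Each fragment carries the frame sequence number, its index and the total count."""
    fragments = []
    for k in range(passes):
        payload = rows[k::passes]  # every `passes`-th row starting at row k
        header = {"frame_seq": frame_seq, "segment": k, "segments": passes}
        fragments.append((header, payload))
    return fragments

def merge_i_frame_interlaced(fragments, total_rows):
    """Decoder side: reassemble the full I frame from its fragments."""
    rows = [None] * total_rows
    for header, payload in fragments:
        k, passes = header["segment"], header["segments"]
        for i, row in enumerate(payload):
            rows[k + i * passes] = row
    return rows

frame = [[r] * 8 for r in range(16)]  # 16 toy rows
frags = split_i_frame_interlaced(frame, passes=2, frame_seq=7)
assert merge_i_frame_interlaced(frags, 16) == frame
```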
It is worth emphasizing that, before Huffman decoding or encoding, the quantized data of step S2 are analyzed statistically to derive several Huffman trees, and these Huffman trees are built into the Huffman encoder and the Huffman decoder in advance.
General Huffman coding uses a single static Huffman tree, which is equivalent to coding every video picture with the same symbol statistics. Here, however, the color depth is only 16 levels (4 bits), the combinations of two adjacent pixels that actually occur are few, and in most cases the left side of the picture is fully lit while the right side is dark, or the reverse. Multiple Huffman trees can therefore be defined (as symbol tables in static arrays, which occupy no stack space). In this embodiment, 20 Huffman trees are derived statistically to compress the data further.
The workflow is as follows:
(1) play sample videos, convert them all to the standard 4-bit format, and statistically derive 20 Huffman trees;
(2) build all 20 tables into the encoder and assign them a fixed order;
(3) during encoding, the encoder performs a first-pass compression with all 20 tables and obtains the index of the table giving the best result;
(4) the result produced with the best table is transmitted, and that table index is also signaled to the decoder inside the frame;
(5) the decoder selects the Huffman tree array according to the specified index and continues decoding.
The prerequisite here is that the encoder has sufficient computing power (Huffman coding is not very expensive), which the mobile phone or PC used as the encoding environment can provide. Because the Huffman trees are built into the Huffman encoder and decoder in advance, memory usage and cost are greatly reduced and system computing speed is improved.
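A minimal sketch of this table-selection step is given below; the two toy Huffman tables are assumptions used only to show the mechanism of trying every built-in table and signaling the index of the winner (the real method derives 20 tables from sample videos).

```python
def encode_with_best_table(nibbles, tables):
    """Try every built-in Huffman table and keep the one giving the shortest output.
    Returns (table_index, bit_string); the index is signaled to the decoder in the frame."""
    best_idx, best_bits = None, None
    for idx, table in enumerate(tables):
        bits = "".join(table[n] for n in nibbles)
        if best_bits is None or len(bits) < len(best_bits):
            best_idx, best_bits = idx, bits
    return best_idx, best_bits

def decode_with_table(bits, table):
    """Decoder side: walk the bit string with the table selected by the signaled index."""
    inverse = {code: sym for sym, code in table.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return out

# Two toy prefix-free tables biased toward different symbols (illustrative only)
tables = [
    {0: "0", 15: "10", 8: "110", 7: "111"},   # favors dark pixels
    {15: "0", 0: "10", 8: "110", 7: "111"},   # favors bright pixels
]
data = [15, 15, 15, 0, 8]
idx, bits = encode_with_best_table(data, tables)
assert decode_with_table(bits, tables[idx]) == data
```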
As mentioned above, the quantization process of the I frame is as follows:
Define a background color (0 to 255) and a magnitude (1 to 16) for the change values of the current frame. Taking the background color as the base, each pixel takes its difference from the background color, and the difference divided by the magnitude is recorded in 4 bits. When decoding, the whole frame is first filled with the background color, and each 4-bit value is then multiplied by the magnitude to obtain the actual difference.
Let the difference be d, the magnitude be n, and the recorded value be t:
Encoding: t = d / n
Decoding: d = t * n
This effectively improves the grayscale resolution at different overall brightness levels. For example, when the picture is bright overall, the human eye is insensitive to low-brightness points and does not need a very high resolution, so 4 bits (16 levels) are sufficient. When the picture becomes dark overall, it is usually a complex logo; the viewer's pupils dilate accordingly, and a finer magnitude makes the picture more delicate. The choice of magnitude is made at the encoding side, and the computing power of today's mobile phones or PCs is sufficient for it.
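A small sketch of this quantization follows, under the assumption that the recorded 4-bit value is unsigned (0 to 15) and that integer division with clamping is used, since the patent does not specify the rounding or signedness.

```python
def encode_i_pixel(pixel, background, magnitude):
    """Encoding t = d / n: difference from the background color, recorded in 4 bits."""
    d = pixel - background
    return max(0, min(15, d // magnitude))   # assumed unsigned 4-bit range

def decode_i_pixel(t, background, magnitude):
    """Decoding d = t * n: the frame starts at the background color, then the scaled
    difference is added back."""
    return background + t * magnitude

background, magnitude = 32, 8                # example values within the stated ranges
for pixel in (32, 60, 150, 255):
    t = encode_i_pixel(pixel, background, magnitude)
    print(pixel, "->", t, "->", decode_i_pixel(t, background, magnitude))
# 32 -> 0 -> 32, 60 -> 3 -> 56, 150 -> 14 -> 144, 255 -> 15 -> 152
```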
The quantization process of the P frame is as follows:
A P frame contains only differences. A color-dithering algorithm is introduced, and each difference is re-encoded with a color depth smaller than the 4 bits of the I frame, here fixed at 2 bits. A [1,0][0,1] dither matrix is used: even points take 0, and the odd points of the next line take 1 or 0 depending on the offset. Color dithering is a lossy compression technique, but using 2-bit dithering for the P frame does not cause a large loss of detail, because the basic detail is carried by the I frame.
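The dither pattern is only loosely described above, so the following sketch should be read as one possible interpretation: an ordered 2x2 threshold matrix reduces a non-negative 4-bit difference to 2 bits. The threshold values and the scaling are assumptions, not taken from the patent.

```python
DITHER_2X2 = [[0, 2],
              [3, 1]]                 # assumed 2x2 ordered-dither thresholds (0..3)

def dither_to_2bit(diff_4bit, x, y):
    """Reduce a 4-bit difference (0..15) to 2 bits (0..3) with ordered dithering.
    The per-pixel threshold nudges the rounding so the average level is preserved."""
    threshold = DITHER_2X2[y % 2][x % 2]
    return min(3, (diff_4bit + threshold) // 4)

def restore_from_2bit(value_2bit):
    """Approximate the original 4-bit difference from the 2-bit value."""
    return value_2bit * 4

row = [dither_to_2bit(5, x, 0) for x in range(4)]
print(row)  # [1, 1, 1, 1]
```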
The second problem to be solved by the present invention is the display effect of the LED display.
General LEDs use 8-bit hardware PWM and the overall brightness is high, but because of eye fatigue and the scattering between dots in the matrix, the human eye cannot distinguish the conventional 8-bit, 256-level grayscale. The present invention therefore reduces the color depth of each point to 4 bits (16 levels), which is basically sufficient for the dynamic color effects required by the product. Further, the video and picture transmission method uses a gamma adjuster to fine-tune the brightness of the LED display. Because a gamma adjuster is used, the LED display shows uniform brightness and the luminance saturation problem no longer occurs.
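A gamma adjuster for a 4-bit display can be sketched as a small lookup table mapping gray levels to PWM duty cycles; the gamma exponent of 2.2 and the 8-bit PWM range are assumptions, not values stated in the patent.

```python
def build_gamma_lut(levels=16, pwm_max=255, gamma=2.2):
    """Map 4-bit gray levels to 8-bit PWM duty cycles through a gamma curve,
    so that the perceived LED brightness steps look uniform."""
    return [round(pwm_max * (i / (levels - 1)) ** gamma) for i in range(levels)]

GAMMA_LUT = build_gamma_lut()
print(GAMMA_LUT)  # 16 monotonically increasing PWM values from 0 to 255
```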
The specific implementations listed above are not restrictive; various modifications and variations made by those skilled in the art without departing from the scope of the invention all fall within the protection scope of the present invention.

Claims (8)

1. A video and picture transmission method for a smart wearable device, comprising the following steps:
S1: perform YUV decoding on a video- or picture-format file, the Y component outputting the original 8-bit data;
S2: quantize the 8-bit data;
S3: encode the quantized data from step S2 with semi-adaptive Huffman coding to obtain quantized I-frame and P-frame data;
S4: perform Huffman decoding on the data packets from step S3 to obtain grayscale data;
S5: perform I-frame Huffman decoding on the grayscale data of step S4 to obtain the segments of the I frame and the P frames;
S6: quantize and frame each I-frame segment of step S5;
S7: restore the P frames of step S5 by Huffman decoding and quantize them;
S8: superimpose the quantized P frames of step S7 onto the I frame;
S9: wait for the next I frame or P frame;
characterized in that: before Huffman decoding or Huffman encoding, the quantized data of step S2 are analyzed statistically to derive several Huffman trees, and these Huffman trees are built into the Huffman encoder and the Huffman decoder in advance.
2. The video and picture transmission method for a smart wearable device according to claim 1, characterized in that: before Huffman decoding or encoding, the quantized data of step S2 are analyzed statistically to derive 20 Huffman trees.
3. The video and picture transmission method for a smart wearable device according to claim 1, characterized in that: in step S2, the 8-bit data are quantized into 4-bit data.
4. The video and picture transmission method for a smart wearable device according to claim 1, characterized in that: the video and picture transmission method uses a gamma adjuster to fine-tune the brightness of the LED display.
5. The video and picture transmission method for a smart wearable device according to claim 1, characterized in that: the Huffman decoding of steps S4, S5 and S7 does not predict B frames; only I frames and P frames are computed.
6. The video and picture transmission method for a smart wearable device according to claim 5, characterized in that: the I frame is a complete frame.
7. The video and picture transmission method for a smart wearable device according to claim 5, characterized in that: a P frame records the change values relative to the previous I frame or the previously synthesized P frame.
8. The video and picture transmission method for a smart wearable device according to claim 1, characterized in that: between steps S4 and S5, interlaced scanning of the I frame is used and the I frame is transferred to the Huffman decoder.
CN201510128785.5A 2015-03-24 2015-03-24 Smart wearing device video image transmitting method Pending CN104754332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510128785.5A CN104754332A (en) 2015-03-24 2015-03-24 Smart wearing device video image transmitting method

Publications (1)

Publication Number Publication Date
CN104754332A true CN104754332A (en) 2015-07-01

Family

ID=53593354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510128785.5A Pending CN104754332A (en) 2015-03-24 2015-03-24 Smart wearing device video image transmitting method

Country Status (1)

Country Link
CN (1) CN104754332A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6294079A (en) * 1985-10-21 1987-04-30 Canon Inc Picture data transmission equipment
CN1617593A (en) * 2003-11-13 2005-05-18 微软公司 Signaling valid entry points in a video stream
CN101031091A (en) * 2006-02-28 2007-09-05 华为技术有限公司 Method and apparatus for correcting video-flow gamma characteristics of video telecommunication terminal
CN102483333A (en) * 2009-07-09 2012-05-30 通腾科技股份有限公司 Navigation device using map data with route search acceleration data
CN103997694A (en) * 2014-05-30 2014-08-20 深圳市华宝电子科技有限公司 Video backward-playing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
严剑: "Huffman算法及其在数据压缩中的应用" [The Huffman Algorithm and Its Application in Data Compression], 《计算机与现代化》 [Computer and Modernization] *
于龙: "基于FPGA+DSP的H.264视频编解码系统设计与实现" [Design and Implementation of an H.264 Video Codec System Based on FPGA+DSP], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology Series] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450929A (en) * 2015-12-09 2016-03-30 安徽海聚信息科技有限责任公司 Information identification and communication method for smart wearable devices
CN108366263A (en) * 2018-01-11 2018-08-03 上海掌门科技有限公司 Video encoding/decoding method, equipment and storage medium
CN109068133A (en) * 2018-09-17 2018-12-21 鲍金龙 Video encoding/decoding method and device
CN109068133B (en) * 2018-09-17 2022-04-29 鲍金龙 Video decoding method and device
CN114173183A (en) * 2021-09-26 2022-03-11 荣耀终端有限公司 Screen projection method and electronic equipment

Similar Documents

Publication Publication Date Title
US10356444B2 (en) Method and apparatus for encoding and decoding high dynamic range (HDR) videos
KR102529013B1 (en) Method and apparatus for encoding and decoding color pictures
EP3632114B1 (en) Substream multiplexing for display stream compression
CN107431812B (en) For showing the complex region detection of stream compression
CN102812705B (en) image encoder and image decoder
US20180005357A1 (en) Method and device for mapping a hdr picture to a sdr picture and corresponding sdr to hdr mapping method and device
KR20180021869A (en) Method and device for encoding and decoding HDR color pictures
WO2015111467A1 (en) Transmission device, transmission method, receiving device and receiving method
CN101796843A (en) Image coding method, image decoding method, image coding device, image decoding device, program, and integrated circuit
CN107787581A (en) The metadata of the demarcation lighting condition of reference viewing environment for video playback is described
CN108353177A (en) For reducing the system and method for slice boundaries visual artifacts in compression DSC is flowed in display
CN108702513B (en) Apparatus and method for adaptive computation of quantization parameters in display stream compression
RU2683628C1 (en) Transmission device, transmission method, reception device and reception method
CN104754332A (en) Smart wearing device video image transmitting method
CN110741623B (en) Method and apparatus for gamut mapping
TW200744382A (en) Block truncation coding (BTC) method and apparatus
KR102185027B1 (en) Apparatus and method for vector-based entropy coding for display stream compression
JP2020145707A (en) Method and apparatus for processing image data
CN108881915B (en) Device and method for playing video based on DSC (differential scanning sequence) coding technology
US10200697B2 (en) Display stream compression pixel format extensions using subpixel packing
CN108886615A (en) The device and method of perception quantization parameter (QP) weighting for the compression of display stream
US10593257B2 (en) Stress profile compression
EP3557872A1 (en) Method and device for encoding an image or video with optimized compression efficiency preserving image or video fidelity
CN109326251A (en) Image data compression method and sequence controller
KR20160058153A (en) System and method for reducing visible artifacts in the display of compressed and decompressed digital images and video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150701

WD01 Invention patent application deemed withdrawn after publication