Multimedia-Related Terminology

8-VSB
See vestigial sideband.

AC-3
An early name for Dolby Digital.

AC'97, AC'98
These are Intel specifications for the audio I/O implementation on PCs. Two chips are defined: an analog audio I/O chip and a digital controller chip. The digital chip will eventually be replaced by a software solution. The goal is to increase the audio performance of PCs and lower cost.

AC Coupled
AC coupling passes a signal through a capacitor to remove any DC offset, or the overall voltage level that the video signal "rides" on. One way to find the signal is to remove the unknown DC offset by AC coupling, and then do DC restoration to add a known DC offset (one that we selected). Another reason AC coupling is important is that it removes large (and potentially harmful) DC offsets.

Active Video
The part of the video waveform that contains picture information. Most of the active video, if not all of it, is visible on the display.

ADC, A/D
Analog-to-Digital Converter. This device is used to digitize audio and video. An ADC for digitizing video must be capable of sampling at 10 to 150 million samples per second (MSPS).

AFC
See automatic frequency control.

AGC
See automatic gain control.

Alpha
See alpha channel and alpha mix.

Alpha Channel
The alpha channel is used to specify an alpha value for each video sample. The alpha value is used to control the blending, on a sample-by-sample basis, of two images.


new pixel = (alpha)(sample A color) + (1 - alpha)(sample B color)

Alpha typically has a normalized value of 0 to 1. In a graphics environment, the alpha values can be stored in additional memory. When you hear about 32-bit frame buffers, what this really means is that there are 24 bits of color, 8 each for red, green, and blue, along with an 8-bit alpha channel. Also see alpha mix.
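
As a concrete sketch, here is the blend applied per sample in Python (illustrative names; samples are assumed to be 8-bit RGB tuples, alpha normalized to 0..1):

def alpha_blend(sample_a, sample_b, alpha):
    # new pixel = (alpha)(sample A color) + (1 - alpha)(sample B color)
    return tuple(round(alpha * a + (1.0 - alpha) * b)
                 for a, b in zip(sample_a, sample_b))

# A 50/50 mix of pure red and pure blue gives magenta at half intensity.
print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))    # (128, 0, 128)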

Alpha Mix
This is a way of combining two images. How the mixing is performed is controlled by the alpha channel. The little box that appears over the left-hand shoulder of a news anchor is put there by an alpha mixer. Wherever the little box is to appear, a "1" is put in the alpha channel. Wherever it doesn't appear, a "0" is used. When the alpha mixer sees a "1" coming from the alpha channel, it displays the little box. Whenever it sees a "0", it displays the news anchor. Of course, the polarity could just as well be reversed; the point is that the alpha value selects which source appears.

AM, Amplitude Modulation
A method of encoding data onto a carrier, such that the amplitude of the carrier is proportional to the data value.

Ancillary Timecode
BT.1366 defines how to transfer VITC and LTC as ancillary data in digital component interfaces.

Anti-Alias Filter
A lowpass filter used to bandwidth-limit a signal to less than one-half the sampling rate.

Aperture Delay
Aperture delay is the time from an edge of the input clock of the ADC until the time the ADC actually takes the sample. The smaller this number, the better.

Aperture Jitter
The uncertainty in the aperture delay. This means the aperture delay time changes a little bit each time, and that little bit of change is the aperture jitter.

Artifacts
In the video domain, artifacts are blemishes, noise, snow, spots, whatever. When you have an image artifact, something is wrong with the picture from a visual standpoint. Don't confuse this term with not having the display properly adjusted. For example, if the hue control is set wrong, the picture will look bad, but this is not an artifact. An artifact is some physical disruption of the image.

Aspect Ratio
The ratio of the width of the picture to the height. Displays commonly have a 4:3 or 16:9 aspect ratio. Program material may have other aspect ratios (such as 20:9), resulting in it being "letterboxed" on the display.

Asynchronous
Refers to circuitry without a common clock or timing signal.

ATC
See ancillary timecode.

ATSC
Advanced Television Systems Committee. It defined the SDTV and HDTV standards for the United States, using MPEG-2 for video and Dolby Digital for audio. Other countries are also adopting the ATSC HDTV standard.

ATSC A/49
Defines the ghost cancellation reference signal for NTSC. Download the specification.

ATSC A/52
Defines the (Dolby Digital) audio compression for ATSC HDTV. Download the specification.

ATSC A/53, A/54
Defines ATSC HDTV for the USA. Download the A/53 and A/54 specifications.

ATSC A/57
Defines the program, episode, and version ID for ATSC HDTV. Download the specification.

ATSC A/63
Defines the method for handling 25 and 50 Hz video for ATSC HDTV. Download the specification.

ATSC A/65
Defines the program and system information protocol for ATSC HDTV. Download the specification.

ATSC A/70
Defines the conditional access system for ATSC HDTV. Download the specification.

ATSC A/90
Defines the data broadcast standard for ATSC HDTV. Download the specification.

ATSC A/92
Defines the IP multicast standard for ATSC HDTV. Download the specification.

Audio Modulation
Refers to modifying an audio subcarrier with audio information so that it may be mixed with the video information and transmitted.

Audio Subcarrier
A specific frequency that is modulated with audio data.

Automatic Frequency Control (AFC)
A technique to lock onto and track a desired frequency.

Automatic Gain Control (AGC)
A circuit that automatically adjusts its gain so as to maintain a constant output amplitude regardless of the input amplitude.

Back Porch
The portion of the video waveform between the trailing edge of the horizontal sync and the start of active video.

Bandpass Filter
A circuit that allows only a selected range of frequencies to pass through.

Bandwidth (BW)
The range of frequencies a circuit will respond to or pass through. It may also be the difference between the highest and lowest frequencies of a signal.

Bandwidth Segmented Orthogonal Frequency Division Multiplexing
BST-OFDM attempts to improve on COFDM by modulating some OFDM carriers differently from others within the same multiplex. A given transmission channel may therefore be "segmented", with different segments being modulated differently.

Baseband
When applied to audio and video, baseband means an audio or video signal that is not modulated onto another carrier (such as RF modulated to channel 3 or 4 for example). In DTV, baseband also may refer to the basic (unmodulated) MPEG-2 program or system stream.

BBC
British Broadcasting Corporation.

BITC
Burned-In Time Code. The timecode information is displayed within a portion of the picture, and may be viewed on any monitor or TV.

Black Burst
Black burst is a composite video signal with a totally black picture. It is used to synchronize video equipment so that their video outputs are aligned. Black burst tells the video equipment the vertical sync, horizontal sync, and chroma burst timing.

Black Level
This level represents the darkest an image can get, defining what black is for a particular video system. If for some reason the video goes below this level, it is referred to as blacker-than-black. You could say that sync is blacker-than-black.

Blanking
On a CRT display, the scan line moves from the left edge to the right edge, jumps back to the left edge, and starts all over again, on down the screen. When the scan line hits the right side and is about to be brought back to the left side, the video signal is blanked so that you can't "see" the return path of the scan beam from the right to the left edge. To blank the video signal, the video level is brought down to the blanking level, which is below the black level if a pedestal is used.

Blanking Level
That level of the video waveform defined by the system to be where blanking occurs. This could be the black level if a pedestal is not used or below the black level if a pedestal is used.

Blooming
This is an effect, sometimes caused when video becomes whiter-than-white, in which a line that is supposed to be nice and thin becomes fat and fuzzy on the screen.

Breezeway
That portion of the video waveform between the trailing edge of horizontal sync and the start of color burst.

Brightness
This refers to how much light is emitted by the display, and is controlled by the intensity of the video level.

BS.707
This ITU recommendation specifies the stereo audio specifications (Zweiton and NICAM 728) for the PAL and SECAM video standards. Purchase the specification.

BST-OFDM
See Bandwidth Segmented Orthogonal Frequency Division Multiplexing.

BT.470
Specifies the various NTSC, PAL, and SECAM video standards used around the world. SMPTE 170M also specifies the (M) NTSC video standard used in the United States. BT.470 has replaced BT.624. Purchase the specification.

BT.601
720 x 480 (59.94 Hz), 960 x 480 (59.94 Hz), 720 x 576 (50 Hz), and 960 x 576 (50 Hz) 4:2:2 YCbCr pro-video interlaced standards. Purchase the specification.

BT.653
Defines the various teletext standards used around the world. Systems A, B, C, and D for both 525-line and 625-line TV systems are defined. Purchase the specification.

BT.656
Defines a parallel interface (8-bit or 10-bit, 27 MHz) and a serial interface (270 Mbps) for the transmission of 4:3 BT.601 4:2:2 YCbCr digital video between pro-video equipment. Purchase the specification. Also see SMPTE 125M.

BT.709
This ITU recommendation specifies the 1920 x 1080 RGB and 4:2:2 YCbCr interlaced and progressive 16:9 digital video standards. Frame refresh rates of 60, 59.94, 50, 30, 29.97, 25, 24, and 23.976 Hz are supported. Purchase the specification.

BT.799
Defines the transmission of 4:3 BT.601 4:4:4:4 YCbCrK and RGBK digital video between pro-video equipment. Two parallel interfaces (8-bit or 10-bit, 27 MHz) or two serial interfaces (270 Mbps) are used. Purchase the specification.

BT.1119
Defines the widescreen signaling (WSS) information for NTSC and PAL video signals. For (B, D, G, H, I) PAL systems, WSS may be present on line 23, and on lines 22 and 285 for (M) NTSC. Purchase the ITU specification.

BT.1124
Defines the ghost cancellation reference (GCR) signal for NTSC and PAL. Purchase the specification.

BT.1197
Defines the PALplus standard, allowing the transmission of 16:9 programs over normal PAL transmission systems. Purchase the specification.

BT.1302
Defines the transmission of 16:9 BT.601 4:2:2 YCbCr digital video between pro-video equipment. It defines a parallel interface (8-bit or 10-bit, 36 MHz) and a serial interface (360 Mbps). Purchase the specification.

BT.1303
Defines the transmission of 16:9 BT.601 4:4:4:4 YCbCrK and RGBK digital video between pro-video equipment. Two parallel interfaces (8-bit or 10-bit, 36 MHz) or two serial interfaces (360 Mbps) are used. Purchase the specification.

BT.1304
Specifies the checksum for error detection and status for pro-video digital interfaces. Purchase the specification.

BT.1305
Specifies the digital audio format for ancillary data for pro-video digital interfaces. Purchase the specification. Also see SMPTE 272M.

BT.1358
720 x 480 (59.94 Hz) and 720 x 576 (50 Hz) 4:2:2 YCbCr pro-video progressive standards. Purchase the specification. Also see SMPTE 293M.

BT.1362
Pro-video serial interface for the transmission of BT.1358 digital video between equipment. Two 270 Mbps serial interfaces are used. Purchase the specification.

BT.1364
Specifies the ancillary data packet format for pro-video digital interfaces. Purchase the specification. Also see SMPTE 291M.

BT.1365
Specifies the 24-bit digital audio format for pro-video HDTV serial interfaces. Purchase the specification. Also see SMPTE 299M.

BT.1366
Specifies the transmission of timecode as ancillary data for pro-video digital interfaces. Purchase the specification. Also see SMPTE 266M.

BTSC
This EIA TVSB5 standard defines a technique for implementing stereo audio for NTSC video. One FM subcarrier transmits an L+R signal, and an AM subcarrier transmits an L-R signal.

Burst
See color burst.

Burst Gate
This is a signal that tells a video decoder where the color burst is located within the scan line.

B'-Y'
The blue-minus-luma signal, also called a color difference signal. When added to the luma (Y') signal, it produces the blue video signal.

Carrier
A frequency that is modulated with data to be transmitted.

CATV
Community antenna television, now generally meaning cable TV.

CBC
Canadian Broadcasting Corporation.

CBR
Abbreviation for constant bit rate.

CCIR
Comite Consultatif International des Radiocommunications or International Radio Consultative Committee. The CCIR no longer exists; it has been absorbed into its parent body, the ITU. For a given "CCIR xxx" specification, see "BT.xxx".

CGMS-A
Copy Generation Management System - Analog (CGMS-A). See EIA-608.

Chaoji VideoCD
Another name for Super VideoCD.

Checksum
An error-detecting scheme in which the transmitted check value is the sum of the data values transmitted. The receiver computes the sum of the received data values and compares it to the transmitted sum. If they are equal, the transmission was error-free.
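
A minimal sketch of the scheme (illustrative only; real interfaces such as BT.1304 define their own word widths and field layouts):

def checksum(words, width=8):
    # Sum the data values and truncate to the word width.
    return sum(words) & ((1 << width) - 1)

data = [0x12, 0x34, 0x56]
message = data + [checksum(data)]               # sender appends the sum
assert checksum(message[:-1]) == message[-1]    # receiver verifies it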

Chroma
The NTSC, PAL, or SECAM video signal contains two parts that make up what you see on the display: the intensity part, and the color part. Chroma is the color part.

Chroma Bandpass
In a NTSC or PAL video signal, the luma (black and white) and the chroma (color) information are combined together. If you want to decode an NTSC or PAL video signal, the luma and chroma must be separated. A chroma bandpass filter removes the luma from the video signal, leaving the chroma relatively intact. This works reasonably well except in images where the luma and chroma information overlap, meaning that we have luma and chroma at the same frequencies. The filter can't tell the difference between the two and passes everything. This can make for a funny-looking picture. Next time you're watching TV and someone is wearing a herringbone jacket or a shirt with thin, closely spaced stripes, take a good look. You may see a rainbow color effect moving through that area. What's happening is that the video decoder thinks that the luma is chroma. Since the luma isn't chroma, the video decoder can't figure out what color it is, and it shows up as a rainbow pattern. This problem can be overcome by using a comb filter.

Chroma Burst
See color burst.

Chroma Demodulator
After the NTSC or PAL video signal makes its way through the Y/C separator, the colors must be decoded. That's what a chroma demodulator does. It takes the chroma output of the Y/C separator and recovers two color difference signals (typically I and Q or U and V). Now, with the luma information and two color difference signals, the video system can figure out what colors to display.

Chroma Key
This is a method of combining two video images. An example of chroma keying in action is the nightly news person standing in front of a giant weather map. In actuality, the person is standing in front of a blue or green background, and their image is mixed with a computer-generated weather map. This is how it works: a TV camera is pointed at the person, and its output is fed, along with the image of the weather map, into a box. Inside the box, a decision is made: wherever it sees the blue or green background, it displays the weather map; otherwise, it shows the person. So, whenever the person moves around, the box figures out where they are and displays the appropriate image.

Chroma Trap
In a NTSC or PAL video signal, the luma (black and white) and the chroma (color) information are combined together. If you want to decode the video signal, the luma and chroma must be separated. The chroma trap is one method for separating the chroma from the luma, leaving the luma relatively intact. How does it work? The NTSC or PAL signal is fed to a trap filter, which, for all practical purposes, allows certain frequencies to pass through but not others. The trap filter is designed to remove the chroma, so that the output of the filter contains only the luma. Since this trap stops chroma, it's called a chroma trap. The sad part is that the filter removes not only chroma, but also any luma that exists within the frequencies where the trap operates. The filter only knows frequency ranges and, depending on the image, the luma information may overlap the chroma information. The filter can't tell the difference between luma and chroma, so it traps both when they are in the same range. What's the big deal? Well, you lose luma, and this means that the picture is degraded somewhat. Using a comb filter for Y/C separation is better than a chroma trap or chroma bandpass.

Chrominance
In video, the terms chrominance and chroma are commonly (and incorrectly) interchanged. See the definition of chroma.

CIF
Common Interface Format or Common Image Format. The Common Interface Format was developed to support video conferencing. It has an active resolution of 352 x 288 and a refresh rate of 29.97 frames per second. The High-Definition Common Image Format (HD-CIF) is used for HDTV production and distribution, having an active resolution of 1920 x 1080 with a frame refresh rate of 23.976, 24, 29.97, 30, 50, 59.94, or 60 Hz.

Clamp
This is basically another name for the DC-restoration circuit. It can also refer to a switch used within the DC-restoration circuit. When it means DC restoration, it's usually used as "clamping". When it's the switch, it's just "clamp".

Clipping Logic
A circuit used to prevent illegal color conversions. Some colors can exist in one color space but not in another. Right after the conversion from one color space to another, a color space converter might check for illegal colors. If any appear, the clipping logic is used to limit, or clip, part of the information until a legal color can be represented. Since this circuit clips off some information and is built using logic, it's not too hard to see how the name "clipping logic" came about.

Closed Captioning
A service which decodes text information transmitted with the video signal and displays it on the display. For NTSC, the caption signal may be present on lines 21 and 284. For PAL, the caption signal may be present on lines 22 and 334. See the EIA-608 specification for (M) NTSC usage of closed captioning and the EIA-708 specification for DTV support.

For MPEG-2 video, including ATSC and DVB, the closed caption data are multiplexed as a separate data stream within the MPEG-2 bitstream. It may use the picture layer user_data bits as specified by EIA-708, or in PES packets (private_stream_1) as specified by ETSI EN 301 775.

For DVD, caption data may be 8-bit user_data in the group_of_pictures header (525/60 systems), a digitized caption signal (quantized to 16 levels) that is processed as normal video data (625/50 systems), or a subpicture that is simply decoded and mixed with the decoded video.

Closed Subtitles
See subtitles.

CMYK
This is a color space primarily used in color printing. CMYK is an acronym for Cyan, Magenta, Yellow, and blacK. The CMYK color space is subtractive, meaning that cyan, magenta, yellow, and black pigments or inks are applied to a white surface to remove color information from it and create the final color. Black is used because even if a printer could lay down cyan, magenta, and yellow inks perfectly enough to make black (which it can't over large areas), doing so would be too expensive, since colored inks cost more than black ink. So, instead of putting down a lot of CMY to make black, printers just use black ink.
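
One simple textbook conversion from normalized RGB to CMYK, sketched below, shows how the black (K) component replaces equal parts of cyan, magenta, and yellow (illustrative only; real printing pipelines use device profiles and under-color removal curves):

def rgb_to_cmyk(r, g, b):
    # r, g, b in 0..1; returns c, m, y, k in 0..1
    k = 1.0 - max(r, g, b)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black: no colored ink needed
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.5, 0.0))    # orange: (0.0, 0.5, 1.0, 0.0)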

Coded Orthogonal Frequency Division Multiplexing
Coded orthogonal frequency division multiplexing, or COFDM, transmits digital data differently than 8-VSB or other single-carrier approaches. Frequency division multiplexing means that the data to be transmitted is distributed over many carriers (1,705 or 6,817 for DVB-T), as opposed to modulating a single carrier. Thus, the data rate on each COFDM carrier is much lower than that required of a single carrier. The COFDM carriers are orthogonal, or mutually perpendicular, and forward error correction ("coded") is used.

COFDM is a multiplexing technique rather than a modulation technique. One of any of the common modulation methods, such as QPSK, 16-QAM or 64-QAM, is used to modulate the COFDM carriers.

COFDM
See coded orthogonal frequency division multiplexing.

Color Bars
This is a test pattern used to check whether a video system is calibrated correctly. A video system is calibrated correctly if the colors are the correct brightness, hue, and saturation. This can be checked with a vectorscope.

Color Burst
A waveform of a specific frequency and amplitude that is positioned between the trailing edge of horizontal sync and the start of active video. The color burst tells the color decoder how to decode the color information contained in that line of active video. By looking at the color burst, the decoder can determine what's blue, orange, or magenta. Essentially, the decoder figures out what the correct color is.

Color Decoder
See chroma demodulator.

Color Demodulator
See chroma demodulator.

Color Difference
All of the color spaces used in color video require three components. These might be R'G'B', Y'IQ, Y'UV, or Y'(R'-Y')(B'-Y'). In the Y'(R'-Y')(B'-Y') color space, the R'-Y' and B'-Y' components are often referred to as color difference signals, for obvious reasons: they are made by subtracting the luma (Y') from the red and blue components. I and Q and U and V are also color difference signals, since they are scaled versions of R'-Y' and B'-Y'. The Y' in Y'IQ, Y'UV, and Y'(R'-Y')(B'-Y') is basically the same, although the luma coefficients differ slightly between SDTV and HDTV.
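
For example, using the BT.601 luma weights defined for SDTV, the color difference signals can be formed as in this sketch (normalized, gamma-corrected R'G'B' inputs assumed):

def color_difference(r, g, b):
    # BT.601 luma from gamma-corrected (primed) RGB
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y    # Y', R'-Y', B'-Y'

# Pure blue has a large B'-Y' and a negative R'-Y'.
print(color_difference(0.0, 0.0, 1.0))    # (0.114, -0.114, 0.886)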

Color Edging
Extraneous colors that appear along the edges of objects, but don‘t have a color relationship to those areas.

Color Encoder
The color encoder does the exact opposite of the color decoder. It takes two color difference signals, such as I and Q or U and V, and combines them into a chroma signal.

Color Key
This is essentially the same thing as chroma key.

Color Killer
A color killer is a circuit that shuts off the color decoding if the incoming video does not contain color information. How does this work? The color killer looks for the color burst, and if it can't find it, it shuts off the color decoding. For example, let's say that a color TV is going to receive material recorded in black and white. Since the black and white signal does not contain a color burst, the color decoding is shut off. Why is a color killer used? Well, in the old days, the color decoder would still generate a tiny bit of color if a black and white transmission was received, due to small errors in the color decoder, causing a black and white program to have faint color spots throughout the picture.

Color Modulator
See color encoder.

Color Purity
This term is used to describe how close a color is to its theoretical value. For example, in the Y'UV color space, color purity is specified as a percentage of saturation and +/-q, where q is an angle in degrees, and both quantities are referenced to the color of interest. The smaller the numbers, the closer the actual color is to the color that it's really supposed to be. For a studio-grade device, the saturation is +/-2% and the hue is +/-2 degrees. On a vectorscope, if you're in that range, you're studio quality.

Color Space
A color space is a mathematical representation for a color. No matter what color space is used -- RGB, Y'IQ, Y'UV, etc. -- orange is still orange. What changes is how you represent orange. For example, the RGB color space is based on a Cartesian coordinate system and the HSI color space is based on a polar coordinate system.

ColorStream, ColorStream Pro, ColorStream HD
The name Toshiba uses for the analog YPbPr video interface on their consumer equipment. If the interface supports progressive SDTV resolutions, it is called ColorStream Pro. If the interface supports HDTV resolutions, it is called ColorStream HD.

Color Subcarrier
The color subcarrier is a signal used to control the color encoder or color decoder. For (M) NTSC the frequency of the color subcarrier is about 3.58 MHz, and for (B, D, G, H, I) PAL it's about 4.43 MHz. In the color encoder, a portion of the color subcarrier is used to create the color burst, while in the color decoder, the color burst is used to reconstruct a color subcarrier.

Color Temperature
Color temperature is measured in degrees Kelvin. If a TV has a color temperature of 8,000 degrees Kelvin, that means the whites have the same shade as a piece of pure carbon heated to that temperature. Low color temperatures have a shift towards red; high color temperatures have a shift towards blue.

The standard for video is 6,500 degrees Kelvin. Thus, professional TV monitors use a 6,500-degree color temperature. However, most consumer TVs have a color temperature of 8,000 degrees Kelvin or higher, resulting in a bluish cast. By adjusting the color temperature of the TV, more accurate colors are produced, at the expense of picture brightness.

Comb Filter
This is another method of performing Y/C separation. A comb filter is used in place of a chroma bandpass or chroma trap. The comb filter provides better video quality since it does a better job of separating the luma from the chroma. It reduces the amount of creepy-crawlies or zipper artifacts. It's called a comb filter because its frequency response looks like a comb. The important thing to remember is that the comb filter is a better method for Y/C separation than chroma bandpass or chroma trap.

Common Image Format
See CIF.

Common Interface Format
See CIF.

Component Video
Video using three separate color components, such as digital Y‘CbCr, analog Y‘PbPr, or R‘G‘B‘.

Composite Video
A single video signal that contains brightness, color, and timing information. If a video system is to receive video correctly, it must have several pieces of the puzzle in place. It must have the picture that is to be displayed on the screen, and it must be displayed with the correct colors. This piece is called the active video. The video system also needs information that tells it where to put each pixel. This is called sync. The display needs to know when to shut off the electron beam so the viewer can't see the spot retrace across the CRT display. This piece of the video puzzle is called blanking. Now, each piece could be sent in parallel over three separate connections, and it would still be called video and would still look good on the screen. This is a waste, though, because all three pieces can be combined together so that only one connection is needed. Composite video is a video stream that combines all of the pieces required for displaying an image into one signal, thus requiring only one connection. NTSC and PAL are examples of composite video. Both are made up of active video, horizontal sync, horizontal blanking, vertical sync, vertical blanking, and color burst. RGB is not an example of composite video, even though each red, green, and blue signal may contain sync and blanking information, because all three signals are required to display the picture with the right colors.

Compression Ratio
Compression ratio is a number used to tell how much information is squeezed out of an image when it has been compressed. For example, suppose we start with a 1 MB image and compress it down to 128 KB. The compression ratio would be:


1,048,576 / 131,072 = 8

This represents a compression ratio of 8:1; 1/8 of the original amount of storage is now required. For a given compression technique -- MPEG, for example -- the higher the compression ratio, the worse the image looks. But the ratio says nothing about which compression method is better; that depends on the application. A video stream that is compressed using MPEG at 100:1 may look better than the same video stream compressed to 100:1 using JPEG.

Conditional Access
This is a technology by which service providers enable subscribers to decode and view content. It consists of key decryption (using a key obtained from changing coded keys periodically sent with the content) and descrambling. The decryption may be proprietary (such as Canal+, DigiCipher, Irdeto Access, Nagravision, NDS, Viaccess, etc.) or standardized, such as the DVB common scrambling algorithm and OpenCable. Conditional access may be thought of as a simple form of digital rights management.

Constant Bit Rate
Constant bit rate (CBR) means that a bitstream (compressed or uncompressed) has the same number of bits each second.

Contouring
This is an image artifact caused by not having enough bits to represent the image. The effect is called "contouring" because areas that should change smoothly in brightness break up into visible bands, or contours.

Contrast
A video term referring to how far the whitest whites are from the blackest blacks in a video waveform. If the peak white is far away from the peak black, the image is said to have high contrast. With high contrast, the image is very stark and very "contrasty", like a black-and-white tile floor. If the two are very close to each other, the image is said to have poor, or low, contrast. With poor contrast, an image may be referred to as being "washed out" -- you can't tell the difference between white and black, and the image looks gray.

Creepy Crawlies
Yes, this is a real video term! Creepy-crawlies refers to a specific image artifact that is a result of the NTSC system. When the nightly news is on, and a little box containing a picture appears over the anchorperson's shoulder, or when some computer-generated text shows up on top of the video clip being shown, get up close to the TV and check it out. Along the edges of the box, or along the edges of the text, you'll notice some jaggies "rolling" up (or down) the picture. That's the creepy-crawlies. Some people refer to this as zipper because it looks like one.

Cross Color
This occurs when the video decoder incorrectly interprets high-frequency luma information (brightness) to be chroma information (color), resulting in color being displayed where it shouldn‘t.

Cross Luma
This occurs when the video decoder incorrectly interprets chroma information (color) to be high-frequency luma information (brightness).

Cross Modulation
A condition when one signal erroneously modulates another signal.

Crosstalk
Interference from one signal that is detected on another.

DAC, D/A
These are short for digital-to-analog converter.

DAVIC
Abbreviation for Digital Audio Visual Council. Its goal was to create an industry standard for the end-to-end interoperability of broadcast and interactive digital audio-visual information, and of multimedia communication. The specification is now ISO/IEC 16500 (normative part) and ISO/IEC TR 16501 (informative part).

dB
Abbreviation for decibels, a standard unit for expressing relative power, voltage, or current.

dBm
Measure of power in communications. 0 dBm = 1 mW, with a logarithmic relationship as the values increase or decrease. In a 50-ohm system, 0 dBm = 0.223 volts.

dBw
Decibels referenced to 1 watt.

DC Restoration
DC restoration is what you have to do to a video signal after it has been AC-coupled and must be digitized. Since the video waveform has been AC-coupled, we no longer know absolutely where it sits. For example, is the bottom of the sync tip at -5 V or at 1 V? In fact, not only don't we know where it is, it also changes over time, since the average voltage level of the active video changes over time. Since the ADC requires a known input level and range to work properly, the video signal must be referenced to a known DC level. DC restoration essentially adds a known DC level to an AC-coupled signal. In decoding video, the DC level used for DC restoration is usually chosen so that when the sync tip is digitized, it generates the number 0.
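
Conceptually, the operation looks like this sketch (a digital-domain illustration only; a real clamp adjusts the analog signal ahead of the ADC):

def dc_restore(samples):
    # Estimate the sync tip as the most negative excursion, then shift
    # the whole waveform so that level digitizes to code 0.
    sync_tip = min(samples)
    return [s - sync_tip for s in samples]

# An AC-coupled line wandering around -0.3 V is re-anchored at 0.
print(dc_restore([-0.3, 0.0, 0.4, 0.7]))    # [0.0, 0.3, 0.7, 1.0] (approximately)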

DCT
This is short for Discrete Cosine Transform, used in the MPEG, H.261, and H.263 video compression algorithms.

Decibel
One-tenth of a Bel, used to define the ratio of two powers, voltages, or currents, in terms of gains or losses. It is 10 times the log of the power ratio, and 20 times the log of the voltage or current ratio.
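
In code form (illustrative):

import math

def db_from_power(p_out, p_in):
    return 10.0 * math.log10(p_out / p_in)

def db_from_voltage(v_out, v_in):
    return 20.0 * math.log10(v_out / v_in)

# Doubling the power is about +3 dB; doubling the voltage is about +6 dB.
print(db_from_power(2.0, 1.0), db_from_voltage(2.0, 1.0))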

Decimation
When a video signal is digitized so that 100 samples are produced, but only every other one is stored or used, the signal is decimated by a factor of 2:1. If this is done both horizontally and vertically, the image is now 1/4 of its original size, since 3/4 of the data is missing. If only one out of five samples were used in each direction, then the image would be decimated by a factor of 5:1, and the image would be 1/25 its original size. Decimation, then, is a quick-and-easy method for image scaling.

Decimation can be performed in several ways. One way is the method just described, where data is literally thrown away. Even though this technique is easy to implement and cheap, it introduces aliasing artifacts. Another method is to use a decimation filter, which reduces the aliasing artifacts, but is more costly to implement.
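
A one-dimensional sketch of both approaches (the crude box average below stands in for a real decimation filter):

def decimate_naive(samples, n):
    # Keep every n-th sample; cheap, but introduces aliasing.
    return samples[::n]

def decimate_filtered(samples, n):
    # Average each block of n samples before discarding the rest;
    # a crude lowpass that reduces the aliasing.
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

print(decimate_naive([1, 9, 1, 9, 1, 9], 2))       # [1, 1, 1]
print(decimate_filtered([1, 9, 1, 9, 1, 9], 2))    # [5.0, 5.0, 5.0]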

Decimation Filter
A decimation filter is a lowpass filter designed to provide decimation without the aliasing artifacts associated with simply throwing data away.

De-emphasis
Also referred to as post-emphasis and post-equalization. De-emphasis applies a frequency-response characteristic that is complementary to that introduced by pre-emphasis, restoring the original frequency response.

De-emphasis Network
A circuit used to restore a frequency response to its original form.

Demodulation
The process of recovering an original signal from a modulated carrier.

Demodulator
In NTSC and PAL video, demodulation is the technique used to recover the color difference signals. See the definitions for Chroma Demodulator and Color Decoder; these are two other names for the demodulator used in NTSC/PAL video applications.

Demodulation is also used after DTV tuners to convert the transmitted DTV signal to a baseband MPEG-2 stream.

Differential Gain
Differential gain is how much the color saturation changes when the luma level changes (it isn't supposed to). For a video system, the better the differential gain -- that is, the smaller the number specified -- the better the system is at figuring out the correct color.

Differential Phase
Differential phase is how much the hue changes when the luma level changes (it isn't supposed to). For a video system, the better the differential phase -- that is, the smaller the number specified -- the better the system is at figuring out the correct color.

Digital 8
Digital 8 compresses video using standard DV compression, but records it in a manner that allows it to use standard Hi-8 tape. The result is a DV "box" that can also play standard Hi-8 and 8 mm tapes. On playback, analog tapes are converted to a 25 Mbps compressed signal available via the i-Link digital output interface. Playback from analog tapes has limited video quality. New recordings are digital and identical in performance to DV; audio specs and other data also are the same.

Digital Component Video
Digital video using three separate color components, such as Y‘CbCr or R‘G‘B‘.

Digital Composite Video
Digital video that is essentially the digitized waveform of NTSC or PAL video signals, with specific digital values assigned to the sync, blank, and white levels.

Digital Rights Management
Digital Rights Management (DRM) is a generic term for a number of capabilities that allow a content producer or distributor to determine under what conditions their product can be acquired, stored, viewed, copied, loaned, etc. Popular proprietary solutions include InterTrust and others.

Digital Transmission Content Protection
An encryption method (also known as "5C") developed by Sony, Hitachi, Intel, Matsushita and Toshiba for IEEE 1394 interfaces.

Digital VCR
Digital VCRs are similar to analog VCRs in that tape is still used for storage. Instead of recording an analog audio/video signal, digital VCRs record digital signals, usually using compressed audio/video.

Digital Versatile Disc
See DVD-Video and DVD-Audio.

Digital Vertical Interval Timecode
DVITC digitizes the analog VITC waveform to generate 8-bit values. This allows the VITC to be used with digital video systems. For 525-line video systems, it is defined by SMPTE 266M. BT.1366 defines how to transfer VITC and LTC as ancillary data in digital component interfaces.

Digital Video Recorder
DVRs can be thought of as a digital version of the VCR, with several enhancements. Instead of a tape, the DVR uses an internal hard disk to store compressed audio/video, and it has the ability to record and play back at the same time. The main advantage that DVRs have over VCRs is their ability to time-shift viewing of a program as it is being recorded. This is accomplished by continuing to record the incoming live program while retrieving the earlier part of the program that was just recorded. The DVR also offers pause, rewind, slow motion, and fast forward control, just as with a VCR.

Discrete Cosine Transform, DCT
A DCT is just another way to represent an image. Instead of looking at it in the time domain -- which, by the way, is how we normally do it -- it is viewed in the frequency domain. It's analogous to color spaces, where the color is still the color but is represented differently. The same thing applies here -- the image is still the image, but it is represented in a different way.

Why do JPEG, MPEG, H.261, and H.263 base part of their compression schemes on the DCT? Because it is more efficient to represent an image that way. In the same way that the Y‘CbCr color space is more efficient than RGB in representing an image, the DCT is even more efficient at image representation.
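
As a sketch, here is an unnormalized 1-D DCT-II written straight from the definition (real codecs use fast, fixed-point 8x8 implementations):

import math

def dct_1d(x):
    # X[k] = sum_i x[i] * cos(pi * k * (2i + 1) / (2n)), the DCT-II kernel
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

# A flat block concentrates all of its energy in the k = 0 (DC) term,
# which is what makes the DCT efficient for smooth image regions.
print(dct_1d([1.0, 1.0, 1.0, 1.0]))    # [4.0, ~0, ~0, ~0]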

Discrete Time Oscillator (DTO)
A discrete time oscillator is a digital version of the voltage-controlled oscillator.

Dolby Digital
An audio compression technique developed by Dolby. It is a multi-channel surround sound format used in DVD and HDTV.

Dot Pitch
The distance between screen pixels measured in millimeters. The smaller the number, the better the horizontal resolution.

Double Buffering
As the name implies, you are using two buffers -- for video, this means two frame buffers. While buffer 1 is being read, buffer 2 is being written to. When finished, buffer 2 is read out while buffer 1 is being written to.
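
A minimal sketch of the mechanism:

class DoubleBuffer:
    def __init__(self, size):
        self.buffers = [bytearray(size), bytearray(size)]
        self.front = 0    # index of the buffer currently being displayed

    def back_buffer(self):
        # The hidden buffer; safe to draw into while the front is shown.
        return self.buffers[1 - self.front]

    def swap(self):
        # Flip the roles, typically during vertical blanking to avoid tearing.
        self.front = 1 - self.front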

Downconverter
A circuit used to change a high-frequency signal to a lower frequency.

Downlink
The frequency satellites use to transmit data to Earth stations.

DRM
See Digital Rights Management.

Drop Field Scrambling
This method is identical to the sync suppression technique for scrambling analog TV channels, except there is no suppression of the horizontal blanking intervals. Sync pulse suppression only takes place during the vertical blanking interval. The descrambling pulses still go out for the horizontal blanking intervals (to fool unauthorized descrambling devices). If a descrambling device is triggering on descrambling pulses only, and does not know that the scrambler is using the drop field scrambling technique, it will try to reinsert the horizontal intervals (which were never suppressed). This is known as double reinsertion, which causes compression of the active video signal. An unauthorized descrambling device creates a washed-out picture and loss of neutral sync during drop field scrambling.

DTCP
Short for digital transmission content protection.

DTS
DTS stands for Digital Theater Systems. It is a multi-channel surround sound format, similar to Dolby Digital. For DVDs that use DTS audio, the DVD-Video specification requires that PCM or Dolby Digital audio also be present. In this situation, only two channels of Dolby Digital audio may be present (due to bandwidth limitations).

DTV
Short for digital television, including SDTV, EDTV, and HDTV.

DVB
Short for digital video broadcast, a method of transmitting digital audio and video (SDTV or HDTV resolution), based on MPEG-2. There are several variations: DVB-T for terrestrial broadcasting (ETSI EN 300 744), DVB-S for satellite broadcasting (ETSI EN 300 421), and DVB-C for cable broadcasting (ETSI EN 300 429). Both MPEG-2 and Dolby Digital compressed audio are supported.

DVB-S uses the QPSK modulation system to guard against errors in satellite transmissions caused by reduced signal-to-noise ratio, with channel coding optimized to the error characteristics of the channel. A typical set of parameter values and a 36 MHz transponder give a useful data rate of around 38 Mbps.

DVB-C uses Quadrature Amplitude Modulation (QAM), which is optimized for maximum data rate since the cable environment is less prone to interference than satellite or terrestrial. Systems from 16-QAM up to 256-QAM can be used, but the system centers on 64-QAM, in which an 8 MHz channel can accommodate a physical payload of about 38 Mbps. The cable return path uses Quadrature Phase Shift Keying (QPSK) modulation in a 200 kHz, 1 MHz, or 2 MHz channel to provide a return path of up to about 3 Mbps. The path to the user may be either in-band (embedded in the MPEG-2 Transport Stream in the DVB-C channel) or out-of-band (on a separate 1 or 2 MHz frequency band).

DVD-Audio
DVDs that contain linear PCM audio data in any combination of 44.1, 48.0, 88.2, 96.0, 176.4, or 192 kHz sample rates, 16, 20, or 24 bits per sample, and 1 to 6 channels, subject to a maximum bit rate of 9.6 Mbps. With a 176.4 or 192 kHz sample rate, only two channels are allowed.

Meridian Lossless Packing (MLP) is a lossless compression method that has an approximate 2:1 compression ratio. The use of MLP is optional, but the decoding capability is mandatory on all DVD-Audio players.

Dolby Digital compressed audio is required for any video portion of a DVD-Audio disc.

DVD-Interactive
DVD-Interactive provides additional interactive capability for users by being able to access additional content on the DVD and/or Web sites on the Internet.

DVD-Video
DVDs that contain about two hours of digital audio, video, and data. The video is compressed and stored using MPEG-2 MP@ML. A variable bit rate is used, with an average of about 4 Mbps (video only), and a peak of 10 Mbps (audio and video). The audio is either linear PCM or Dolby Digital compressed audio. DTS compressed audio may also be used as an option.

Linear PCM audio can be sampled at 48 or 96 kHz, 16, 20, or 24 bits per sample, and 1 to 8 channels. The maximum bitrate is 6.144 Mbps, which limits sample rates and bit sizes in some cases.
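
The cap is easy to check, since PCM bitrate is just sample rate x bits per sample x channels (an illustrative calculation):

def pcm_bitrate(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels

# 96 kHz / 24-bit stereo fits under the 6.144 Mbps limit...
print(pcm_bitrate(96000, 24, 2))    # 4,608,000 bps
# ...but 96 kHz / 24-bit 6-channel audio does not.
print(pcm_bitrate(96000, 24, 6))    # 13,824,000 bps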

For Dolby Digital audio, the bitrate is 64 to 448 kbps, with 384 kbps being the normal rate for 5.1 channels and 192 kbps being the normal rate for stereo. The channel combinations are (front/surround): 1/0, 1+1/0 (dual mono), 2/0, 3/0, 2/1, 3/1, 2/2, and 3/2. The LFE channel (0.1) is optional with all 8 combinations.

For DTS audio, the bitrate is 64 to 1,536 kbps. The channel combinations are (front/surround): 1/0, 2/0, 3/0, 2/1, 2/2, 3/2. The LFE channel (0.1) is optional with all 6 combinations.

Columbia Tristar Home Entertainment has introduced a Superbit(TM) DVD that has an average bit rate of about 7 Mbps (video only) for improved video quality. This is achieved by having minimal "extras" on the DVD.

DVI, DVI-D, DVI-I, DVI-A, DVI-CE
Abbreviation for Digital Visual Interface. This is a digital RGB video interface to a display. The EIA-861 standard specifies how to include data such as aspect ratio and format information. The VESA EEDID and DI-EXT standards document data structures and mechanisms to communicate data across DVI. Download the DVI specification.

DVI-D is a digital-only connector; a DVI-I connector handles both analog and digital. DVI-A is available as a plug (male) connector only and mates to the analog-only pins of a DVI-I connector. DVI-A is only used in adapter cables, where there is the need to convert to or from a traditional analog VGA signal.

DVI-CE (now known as HDMI) is a proposed modified version of DVI that is targeted for consumer equipment. It includes audio capability and uses a smaller connector.



DVITC
See digital vertical interval timecode.

DVR
See digital video recorder.

Dynamic Range
The range from the weakest to the strongest signal that a circuit will accept as input or generate as output.

EDTV
See enhanced definition television.

EIA
Electronics Industries Alliance.

EIA-516
United States teletext standard, also called NABTS. Purchase the specification.

EIA-608
United States closed captioning and extended data services (XDS) standard. Revision B adds Copy Generation Management System - Analog (CGMS-A), content advisory (v-chip), Internet Uniform Resource Locators (URLs) using Text-2 (T-2) service, 16-bit Transmission Signal Identifier, and transmission of DTV PSIP data. Purchase the specification.

EIA/IS-702
NTSC Copy Generation Management System - Analog (CGMS-A). This standard added copy protection capabilities to NTSC video by extending the EIA-608 standard to control the Macrovision anti-copy process. It is now included in the latest EIA-608 standard, and has been withdrawn.

EIA-708
United States DTV closed captioning standard. Purchase the specification.

EIA-744
NTSC "v-chip" operation. This standard added content advisory filtering capabilities to NTSC video by extending the EIA-608 standard. It is now included in the latest EIA-608 standard, and has been withdrawn.

EIA-761
Specifies how to convert QAM to 8-VSB, with support for OSD (on screen displays). Purchase the specification.

EIA-762
Specifies how to convert QAM to 8-VSB, with no support for OSD (on screen displays). Purchase the specification.

EIA-766
United States HDTV content advisory standard. Purchase the specification.

EIA-770
This specification consists of three parts (EIA-770.1, EIA-770.2, and EIA-770.3). EIA-770.1 and EIA-770.2 define the analog YPbPr video interface for 525-line interlaced and progressive SDTV systems. EIA-770.3 defines the analog YPbPr video interface for interlaced and progressive HDTV systems. EIA-805 defines how to transfer VBI data over these YPbPr video interfaces. Purchase the specification.

EIA-775
EIA-775 defines a specification for a baseband digital interface to a DTV using IEEE 1394 and provides a level of functionality that is similar to the analog system. It is designed to enable interoperability between a DTV and various types of consumer digital audio/video sources, including set-top boxes and DVRs or VCRs.

EIA-775.1 adds mechanisms to allow a source of MPEG service to utilize the MPEG decoding and display capabilities in a DTV.

EIA-775.2 adds information on how a digital storage device, such as a D-VHS or hard disk digital recorder, may be used by the DTV or by another source device such as a cable set-top box to record or time-shift digital television signals. This standard supports the use of such storage devices by defining Service Selection Information (SSI), methods for managing discontinuities that occur during recording and playback, and rules for management of partial transport streams.

EIA-849 specifies profiles for various applications of the EIA-775 standard, including digital streams compliant with ATSC terrestrial broadcast, direct-broadcast satellite (DBS), OpenCable, and standard definition Digital Video (DV) camcorders. Purchase the specification.

EIA-805
This standard specifies how VBI data are carried on component video interfaces, as described in EIA-770.1 (for 480p signals only), EIA-770.2 (for 480p signals only) and EIA-770.3. This standard does not apply to signals which originate in 480i, as defined in EIA-770.1 and EIA-770.2. The first VBI service defined is Copy Generation Management System (CGMS) information, including signal format and data structure when carried by the VBI of standard definition progressive and high definition YPbPr type component video signals. It is also intended to be usable when the YPbPr signal is converted into other component video interfaces including RGB and VGA. Purchase the specification.

EIA-861
The EIA-861 standard specifies how to include data, such as aspect ratio and format information, on DVI and HDMI. Purchase the specification.

Enhanced Definition Television
EDTV is a television capable of displaying at least 480 progressive active scan lines. No aspect ratio is specified.

For the ATSC system, typical EDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 29.97p, 30p | 59.94i, 60i | 59.94p, 60p
------------------|--------------|--------------|-------------|-------------|------------
640 x 480         | 4:3          | x            | x           |             | x
720 x 360         | 16:9         | x            | x           |             | x
720 x 480         | 4:3          | x            | x           |             | x

For the DVB system, typical EDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 25p | 29.97p, 30p | 50i | 50p | 59.94i, 60i | 59.94p, 60p
------------------|--------------|--------------|-----|-------------|-----|-----|-------------|------------
720 x 432         | 16:9         | x            | x   |             |     | x   |             |
352 x 576         | 4:3          | x            | x   |             |     | x   |             |
480 x 576         | 4:3          | x            | x   |             |     | x   |             |
544 x 576         | 4:3          | x            | x   |             |     | x   |             |
720 x 576         | 4:3          | x            | x   |             |     | x   |             |
720 x 360         | 16:9         | x            |     | x           |     |     |             | x
352 x 480         | 4:3          | x            |     | x           |     |     |             | x
480 x 480         | 4:3          | x            |     | x           |     |     |             | x
544 x 480         | 4:3          | x            |     | x           |     |     |             | x
640 x 480         | 4:3          | x            |     | x           |     |     |             | x
720 x 480         | 4:3          | x            |     | x           |     |     |             | x

Other common EDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 25p | 29.97p, 30p | 50i | 50p | 59.94i, 60i | 59.94p, 60p
------------------|--------------|--------------|-----|-------------|-----|-----|-------------|------------
960 x 480         | 16:9         | x            |     | x           |     |     |             | x
960 x 576         | 16:9         | x            | x   |             |     | x   |             |

p = progressive; i = interlaced.

EIA-J CPR-1204
This EIA-J recommendation specifies another widescreen signaling (WSS) standard for NTSC video signals. WSS may be present on lines 20 and 283. Purchase the specification.

Equalization Pulses
These are two groups of pulses, one that occurs before the serrated vertical sync and another group that occurs after. These pulses happen at twice the normal horizontal scan rate. They exist to ensure correct 2:1 interlacing in early televisions.

Error Concealment
The ability to hide transmission errors that corrupt the content beyond the ability of the receiver to properly display it. Techniques for video include replacing the corrupt region with either earlier video data, interpolated video data from previous and next frames, or interpolated data from neighboring areas within the current frame. Decoded MPEG video may also be processed using deblocking filters to reduce blocking artifacts. Techniques for audio include replacing the corrupt region with interpolated audio data.

Error Resilience
The ability to handle transmission errors without corrupting the content beyond the ability of the receiver to properly display it. MPEG-4 supports error resilience through the use of resynchronization markers, extended header code, data partitioning, and reversible VLCs.

ETSI EN 300 163
This specification defines NICAM 728 digital audio for PAL. Download the specification.

ETSI EN 300 294
Defines the widescreen signaling (WSS) information for PAL video signals. For (B, D, G, H, I) PAL systems, WSS may be present on line 23. Download the specification.

ETSI EN 300 421
This is the DVB-S specification. Download the specification.

ETSI EN 300 429
This is the DVB-C specification. Download the specification.

ETSI EN 300 744
This is the DVB-T specification. Download the specification.

ETSI EN 301 775
This is the specification for the carriage of Vertical Blanking Information (VBI) data in DVB bitstreams. Download the specification.

ETSI ETR 154
This specification defines the basic MPEG audio and video parameters for DVB applications. Download the specification.

ETSI ETS 300 231
This specification defines information sent during the vertical blanking interval using PAL teletext (ETSI ETS 300 706) to control VCRs in Europe (PDC). Download the specification.

ETSI ETS 300 706
This is the enhanced PAL teletext specification. Download the specification.

ETSI ETS 300 707
This specification covers Electronic Program Guides (EPG) sent using PAL teletext (ETSI ETS 300 706). Download the specification.

ETSI ETS 300 708
This specification defines data transmission using PAL teletext (ETSI ETS 300 706). Download the specification.

ETSI ETS 300 731
Defines the PALplus standard, allowing the transmission of 16:9 programs over normal PAL transmission systems. Download the specification.

ETSI ETS 300 732
Defines the ghost cancellation reference (GCR) signal for PAL. Download the specification.

ETSI ETS 300 743
This is the DVB subtitling specification. Download the specification.

Fade
Fading is a method of switching from one video source to another. Next time you watch a TV program (or a movie), pay extra attention when the scene is about to end and go on to another. The scene fades to black, then a fade from black to another scene occurs. Fading between scenes without going to black is called a dissolve. One way to do a fade is to use an alpha mixer.

Field
An interlaced display is made using two fields, each one containing half of the scan lines needed to make up one frame of video. Each field is displayed in its entirety -- therefore, the odd field is displayed, then the even, then the odd, and so on. Fields only exist for interlaced scanning systems. So for (M) NTSC, which has 525 lines per frame, a field has 262.5 lines, and two fields make up a 525-line frame.

Firewire
When Apple Computer initially developed IEEE 1394, it called the technology FireWire.

Flicker
Flicker occurs when the frame rate of the video is too low. It's the same effect produced by an old fluorescent light fixture. The two problems with flicker are that it's distracting and tiring to the eyes.

FM
See frequency modulation.

Frame
A frame of video is essentially one picture or "still" out of a video stream. By playing these individual frames fast enough, it looks like people are "moving" on the screen. It's the same principle as flip cards, cartoons, and movies.

Frame Buffer
A frame buffer is a memory used to hold an image for display. How much memory are we talking about? Well, let's assume a horizontal resolution of 640 pixels and 480 scan lines, and we'll use the RGB color space. This works out to be:


640 x 480 x 3 = 921,600 bytes or 900 KB

So, 900 KB are needed to store one frame of video at that resolution.
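
The same arithmetic generalizes to any resolution and pixel depth (illustrative):

def frame_buffer_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

print(frame_buffer_bytes(640, 480, 3))    # 921,600 bytes: 24-bit RGB
print(frame_buffer_bytes(640, 480, 4))    # 1,228,800 bytes: 24-bit RGB plus an 8-bit alpha channel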

Frame Rate
The frame rate of a video source is how fast a new still image is available. For example, with the NTSC system, the entire display is repainted about once every 30th of a second for a frame rate of about 30 frames per second. For PAL, the frame rate is 25 frames per second. For computer displays, the frame rate is usually 70-90 frames per second.

Frame Rate Conversion
Frame rate conversion is the act of converting one frame rate to another.
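
The crudest approach simply drops or repeats frames, as in this sketch (real converters use motion-compensated interpolation for smoother results):

def convert_frame_rate(frames, src_fps, dst_fps):
    # Nearest-neighbor in time: each output frame reuses the closest input frame.
    n_out = int(len(frames) * dst_fps / src_fps)
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]

# Converting one second of 25 fps video to 30 fps repeats five frames.
out = convert_frame_rate(list(range(25)), 25, 30)
print(len(out))    # 30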

Frequency Modulation (FM)
This technique sends data as frequency variations of a carrier signal.

Front Porch
This is the area of the video waveform that sits between the start of horizontal blanking and the start of horizontal sync.

Gamma
The transfer characteristics of most cameras and displays are nonlinear. For a display, a small change in amplitude when the signal level is small produces a change in the display brightness level, but the same change in amplitude at a high level will not produce the same magnitude of brightness change. This nonlinearity is known as gamma.

Gamma Correction
Before being displayed, linear RGB data must be processed (gamma corrected) to compensate for the nonlinearity of the display.
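
A simple power-law sketch of the idea (real standards such as BT.709 add a small linear segment near black):

def gamma_correct(linear, gamma=2.2):
    # Encode linear light for a display with the given gamma.
    return linear ** (1.0 / gamma)

def gamma_expand(encoded, gamma=2.2):
    # The display's transfer characteristic: back to linear light.
    return encoded ** gamma

# Mid-level linear light must be encoded well above the midpoint.
print(gamma_correct(0.5))    # ~0.73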

GCR
See ghost cancellation reference signal.

Genlock
A video signal provides all of the information necessary for a video decoder to reconstruct the picture. This includes brightness, color, and timing information. To properly decode the video signal, the video decoder must lock to all the timing information embedded within the video signal, including the color burst, horizontal sync, and vertical sync. The decoder looks at the color burst of the video signal and reconstructs the original color subcarrier that was used by the encoder. This is needed to properly decode the color information. It also generates a sample clock (done by looking at the sync information within the video signal), used to clock pixel data out of the decoder into a memory or another circuit for processing. The circuitry within the decoder that does all of this work is called the genlock circuit. Although it sounds simple, the genlock circuit must be able to handle very bad video sources, such as the output of VCRs, cameras, and toys. In reality, the genlock circuit is the most complex section of a video decoder.

Ghost Cancellation Reference
A reference signal on (M) NTSC scan lines 19 and 282 and (B, D, G, H, I) PAL scan line 318 that allows the removal of ghosting from TVs. Filtering is employed to process the transmitted GCR signal and determine how to filter the entire video signal to remove the ghosting. ITU-R BT.1124 and ETSI ETS 300 732 define the standard each country uses. ATSC A/49 also defines the standard for NTSC.

Gray Scale
The term gray scale has several meanings. In some cases, it means the luma component of color video signals. In other cases, it means a black-and-white video signal.

H.261, H.263
The ITU-T H.261 and H.263 video compression standards were developed to implement video conferencing over ISDN, LANs, regular phone lines, etc. H.261 supports video resolutions of 352 x 288 and 176 x 144 at up to 29.97 frames per second. H.263 supports video resolutions of 1408 x 1152, 704 x 576, 352 x 288, 176 x 144, and 128 x 96 at up to 29.97 frames per second.

H.264, H.26L
H.264, the next-generation video codec, was a research project until recently. Previously known as "H.26L", "JVT", and "AVC" (advanced video codec), it is now being worked on by MPEG, with the intention of making it part 10 of the MPEG-4 standard.

H.264 offers bit rates up to 50% less than the MPEG-4 advanced simple profile (ASP) video codec for the same video quality. It is designed to compete with Microsoft‘s WMT v9 technology in bit rate and quality.

HD-CIF
See CIF.

HD-SDTI
High Data-Rate Serial Data Transport Interface, defined by SMPTE 348M.

HDMI
Abbreviation for High Definition Multimedia Interface, a single-cable digital audio/video interface for consumer equipment. It is designed to replace DVI in a backwards compatible fashion and supports EIA-861 and HDCP.

Digital RGB or YCbCr data at rates up to 5 Gbps are supported (HDTV requires 2.2 Gbps). Up to 8 channels of 32-192 kHz digital audio are also supported, along with AV.link (remote control) capability and a smaller 15mm 19-pin connector.

HDTV
Short for High Definition Television. HDTV is capable of displaying at least 720 progressive or 1080 interlaced active scan lines. It must be capable of displaying a 16:9 image using at least 540 progressive or 810 interlaced active scan lines.

For the ATSC system, typical HDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 29.97p, 30p | 59.94i, 60i | 59.94p, 60p
1280 x 720        | 16:9         |      x       |      x      |             |      x
1920 x 1080       | 16:9         |      x       |      x      |      x      |

For the DVB system, typical HDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 25p | 29.97p, 30p | 50i | 50p | 59.94i, 60i | 59.94p, 60p
1280 x 720        | 16:9         |      x       |  x  |      x      |  x  |  x  |      x      |      x
1440 x 1080       | 16:9         |      x       |  x  |      x      |  x  |  x  |      x      |      x
1920 x 1080       | 16:9         |      x       |  x  |      x      |  x  |  x  |      x      |      x

p = progressive; i = interlaced.

High Definition Television
See HDTV.

Highpass Filter
A circuit that passes frequencies above a specific frequency (the cutoff frequency). Frequencies below the cutoff frequency are attenuated.

Horizontal Blanking
During the horizontal blanking interval, the video signal is at the blank level so as not to display the electron beam when it sweeps back from the right to the left side of the CRT screen.

Horizontal Resolution
See resolution.

Horizontal Scan Rate
This is how fast the scanning beam in a display sweeps from side to side. In the NTSC system, each scan line takes 63.556 µs, for a rate of 15.734 kHz. That means the scanning beam moves from side to side 15,734 times a second.

Horizontal Sync
This is the portion of the video signal that tells the display where to place the image in the left-to-right dimension. The horizontal sync pulse tells the receiving system where the beginning of the new scan line is.

House Sync
This is another name for black burst.

HSI
HSI stands for Hue, Saturation and Intensity. HSI is based on polar coordinates, while the RGB color space is based on a three-dimensional Cartesian coordinate system. The intensity, analogous to luma, is the vertical axis of the polar system. The hue is the angle and the saturation is the distance out from the axis. HSI makes manipulating colors more intuitive than the RGB space does. For example, in the HSI space, if you want to change red to pink, you decrease the saturation. In the RGB space, what would you do? My point exactly. In the HSI space, if you wanted to change the color from purple to green, you would adjust the hue. Take a guess what you would have to do in the RGB space. However, the key thing to remember, as with all color spaces, is that it's just a way to represent a color -- nothing more, nothing less.
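
For a feel of the red-to-pink example, Python's standard colorsys module implements the closely related HSV space (see the HSV entry below); only the saturation is changed:

import colorsys

h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)    # pure red: s = 1.0
print(colorsys.hsv_to_rgb(h, 0.4, v))           # (1.0, 0.6, 0.6) -- pink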

HSL
This is similar to HSI, except that HSL stands for Hue, Saturation and Lightness.

HSV
This is similar to HSI, except that HSV stands for Hue, Saturation and Value.

HSYNC
Check out the horizontal sync definition.

Hue
In technical terms, hue refers to the wavelength of the color. That means that hue is the term used for the base color -- red, green, yellow, etc. Hue is completely separate from the intensity or the saturation of the color. For example, a red hue could look brown at low saturation, bright red at a higher level of saturation, or pink at a high brightness level. All three "colors" have the same hue.

Huffman Coding
Huffman coding is a method of data compression. It doesn't matter what the data is -- it could be image data, audio data, or whatever. It just so happens that Huffman coding is one of the techniques used in JPEG, MPEG, H.261, and H.263 to help with the compression. This is how it works. First, take a look at the data that needs to be compressed and create a table that lists how many times each piece of unique data occurs. Now assign a very short code word to the piece of data that occurs most frequently. The next shortest code word is assigned to the piece of data that occurs next most frequently. This continues until all of the unique pieces of data are assigned unique code words of varying lengths. The idea is that data that occurs most frequently is assigned a small code word, and data that rarely occurs is assigned a long code word, resulting in space savings.
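
A minimal Python sketch of that procedure (a standard heap-based construction, not tied to any particular codec):

import heapq
from collections import Counter

def huffman(data):
    # Each heap entry: [frequency, tiebreaker, [symbol, code], ...]
    heap = [[f, i, [sym, ""]] for i, (sym, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]     # extend codes of the two
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]     # least frequent groups
        heapq.heappush(heap, [lo[0] + hi[0], tie] + lo[2:] + hi[2:])
        tie += 1
    return dict(pair for pair in heap[0][2:])

print(huffman("AAAABBBCCD"))   # 'A' gets a 1-bit code, 'D' a 3-bit code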

IDTV
See improved definition television.

IEC 60461
Defines the longitudinal (LTC) and vertical interval (VITC) timecode for NTSC and PAL video systems. LTC requires an entire field time to transfer timecode information, using a separate track. VITC uses one scan line each field during the vertical blanking interval. Also see SMPTE 12M.

IEC 60958
Defines a serial digital audio interface for consumer (SPDIF) and professional applications.

IEC 61834
Defines the DV (originally the "Blue Book") standard. Also see SMPTE 314M.

IEC 61880
Defines the widescreen signaling (WSS) information for NTSC video signals. WSS may be present on lines 20 and 283.

IEC 61883
Defines the methods for transferring data, audio, DV and MPEG-2 data over IEEE 1394.

IEC 62107
Defines the Super VideoCD standard.

IEEE 1394
A high-speed "daisy-chained" serial interface. Digital audio, video, and data can be transferred with either a guaranteed bandwidth or a guaranteed latency. It is hot-pluggable, and uses a small 6-pin or 4-pin connector, with the 6-pin connector providing power.

iLink
Sony‘s name for their IEEE 1394 interface.

Illegal Video
Some colors that exist in the RGB color space can‘t be represented in the NTSC and PAL video domain. For example, 100% saturated red in the RGB space (which is the red color on full strength and the blue and green colors turned off) can‘t exist in the NTSC video signal, due to color bandwidth limitations. The NTSC encoder must be able to determine that an illegal color is being generated and stop that from occurring, since it may cause over-saturation and blooming.

Image Buffer
For all practical purposes, an image buffer is the same as a frame buffer. An image is acquired and stored in the image buffer. Once it is in the image buffer, it can typically be annotated with text or graphics or manipulated in some way, just like anything else in a frame buffer.

Image Compression
Image compression is used to reduce the amount of memory required to store an image. For example, an image that has a resolution of 640 x 480 and is in the RGB color space at 8 bits per color requires 900 KB of storage. If this image can be compressed at a compression ratio of 20:1, then the amount of storage required is only 45 KB. There are several methods of image compression, but the most popular are JPEG and MPEG. H.261 and H.263 are the video compression standards used for video conferencing.

Improved Definition Television
IDTV is different from HDTV. IDTV is a system that improves the display on TVs by adding processing in the TV; standard NTSC or PAL signals are transmitted.

Intensity
This is the same thing as brightness.

Intercast
A method developed by Intel for transmitting web pages during the vertical blanking interval of a NTSC or PAL video signal. It is based on NABTS for (M) NTSC systems.

Interlaced
An interlaced video system is one where two interleaved fields are used to generate one video frame. Therefore, the number of lines in a field is one-half of the number of lines in a frame. In NTSC, there are 262.5 lines per field (525 lines per frame), while there are 312.5 lines per field (625 lines per frame) in PAL. Each field is drawn on the screen consecutively -- first one field, then the other.

Interpolation
Interpolation is a mathematical way of generating additional information. Let‘s say that an image needs to be scaled up by a factor of two, from 100 samples to 200 samples. The "missing" samples are generated by calculating (interpolating) new samples between two existing samples. After all of the "missing" samples have been generated -- presto! -- 200 samples exist where only 100 existed before, and the image is twice as big as it used to be.
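
A minimal sketch of 2x upscaling by linear interpolation (this toy version yields 2N-1 samples, so the 100 samples above would become 199; real scalers also synthesize the final sample, and use fancier filters):

def upsample_2x(samples):
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2.0)   # the interpolated "missing" sample
    out.append(samples[-1])
    return out

print(upsample_2x([10, 20, 30]))    # [10, 15.0, 20, 25.0, 30]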

IRE Unit
An arbitrary unit used to describe the amplitude characteristics of a video signal. White is defined to be 100 IRE and the blanking level is defined to be 0 IRE.

ISMA
Abbreviation for the Internet Streaming Media Alliance. ISMA is a group of industry leaders in content management, distribution infrastructure and media streaming working together to promote open standards for developing end-to-end media streaming solutions. The ISMA specification defines the exact features of the MPEG-4 standard that have to be implemented on the server, client and intermediate components to ensure interoperability across the entire streaming workflow. Similarly, it also defines the exact features and the selected formats of the RTP, RTSP, and SDP standards that have to be implemented.

The ISMA v1.0 specification defines two hierarchical profiles. Profile 0 is aimed at streaming audio/video content over wireless and narrowband networks to low-complexity devices, such as cell phones or PDAs, that have limited viewing and audio capabilities. Profile 1 is aimed at streaming content over broadband-quality networks to provide the end user with a richer viewing experience. Profile 1 is targeted at more powerful devices, such as set-top boxes and personal computers.

ITU-R BT.xxx
See BT.xxx.

Jitter
Short-term variations in the characteristics (such as frequency, amplitude, etc.) of a signal.

JPEG
JPEG stands for Joint Photographic Experts Group. However, what people usually mean when they use the term "JPEG" is the image compression standard they developed. JPEG was developed to compress still images, such as photographs, a single video frame, something scanned into the computer, and so forth. You can run JPEG at any speed that the application requires. For a still picture database, the algorithm doesn‘t have to be very fast. If you run JPEG fast enough, you can compress motion video -- which means that JPEG would have to run at 50 or 60 fields per second. This is called motion JPEG or M-JPEG. You might want to do this if you were designing a video editing system. Now, M-JPEG running at 60 fields per second is not as efficient as MPEG-2 running at 60 fields per second because MPEG was designed to take advantage of certain aspects of motion video.

Line-Locked Clock
A design that ensures that there is always a constant number of samples per scan line, even if the timing of the line changes.

Line Store
A line store is a memory used to hold one scan line of video. If the horizontal resolution of the active display is 640 samples and RGB is used as the color space, the line store would have to be 640 locations long by 3 bytes wide. This amounts to one location for each sample and each color. Line stores are typically used in filtering algorithms. For example, a comb filter is made up of one or more line stores.

Linearity
Linearity is a basic measurement of how well an ADC or DAC is performing. Linearity is typically measured by making the ADC or DAC attempt to generate a linearly increasing signal. The actual output is compared to the ideal output. The difference is a measure of the linearity. The smaller the number, the better. Linearity is typically specified as a range or percentage of LSBs (Least Significant Bits).

Locked
When a PLL is accurately producing timing that is precisely lined up with the timing of the incoming video source, the PLL is said to be "locked". When a PLL is locked, the PLL is stable and there is minimum jitter in the generated sample clock.

Longitudinal Timecode
Timecode information is stored on a separate track from the video, requiring an entire field time to store or read it.

Lossless
Lossless is a term used with image compression. Lossless image compression is when the decompressed image is exactly the same as the original image. It‘s lossless because you haven‘t lost anything.

Lossy
Lossy image compression is the exact opposite of lossless. The regenerated image is different from the original image. The differences may or may not be noticeable, but if the two images are not identical, the compression was lossy.

Lowpass Filter
A circuit that passes frequencies below a specific frequency (the cutoff frequency). Frequencies above the cutoff frequency are attenuated.

LTC
See longitudinal timecode.

Luma
As mentioned in the definition of chroma, the NTSC and PAL video systems use a signal that has two pieces: the black and white part, and the color part. The black and white part is the luma. It was the luma component that allowed color TV broadcasts to be received by black and white TVs and still remain viewable.

Luminance
In video, the terms luminance and luma are commonly (and incorrectly) interchanged. See the definition of luma.

MESECAM
A technique of recording SECAM video. Instead of dividing the FM color subcarrier by four and then multiplying back up on playback, MESECAM uses the same heterodyne conversion as PAL.

M-JPEG
See motion JPEG.

Modulator
A modulator is basically a circuit that combines two different signals in such a way that they can be pulled apart later. What does this have to do with video? Let‘s take the NTSC system as an example, although the example applies equally as well to PAL. The NTSC system may use the Y‘IQ or Y‘UV color space, with the I and Q or U and V signals containing all of the color information for the picture. Two 3.58-MHz color subcarriers (90 degrees out of phase) are modulated by the I and Q or U and V components and added together to create the chroma part of the NTSC video.

Moire
This is a type of image artifact. A moire effect occurs when a pattern is created on the display where there really shouldn‘t be one. A moire pattern is typically generated when two different frequencies beat together to create a new, unwanted frequency.

Monochrome
A monochrome signal is a video source having only one component. Although usually meant to be the luma (or black-and-white) video signal, the red video signal coming into the back of a computer display is monochrome because it only has one component.

Monotonic
This is a term that is used to describe ADCs and DACs. An ADC or DAC is said to be monotonic if for every increase in input signal, the output increases. Any ADC or DAC that is nonmonotonic -- meaning that the output decreases for an increase in input -- is bad! Nobody wants a nonmonotonic ADC or DAC.

Motion Estimation
Motion estimation is trying to figure out where an object has moved from one video frame to the next. Why would you want to do that? Well, let's take an example of a video source showing a ball flying through the air. The background is a solid color that is different from the color of the ball. In one video frame the ball is at one location and in the next video frame the ball has moved up and to the right by some amount. Now let's assume that the video camera has just sent the first video frame of the series. Now, instead of sending the second frame, wouldn't it be more efficient to send only the position of the ball? Nothing else moves, so only two little numbers would have to be sent. This is the essence of motion estimation. By the way, motion estimation is an integral part of MPEG, H.261, and H.263.
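
A toy version of the search an encoder performs: exhaustive block matching with a sum-of-absolute-differences (SAD) cost. Frames here are plain 2-D lists of luma values, and the block size and search range are arbitrary choices for illustration:

def sad(a, b):
    # Sum of absolute differences between two equal-sized blocks.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(frame, x, y, n):
    return [row[x:x + n] for row in frame[y:y + n]]

def motion_vector(prev, curr, bx, by, n=8, search=4):
    # Find the offset in the previous frame that best matches the
    # n x n block at (bx, by) in the current frame.
    target = block(curr, bx, by, n)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and y + n <= len(prev) and x + n <= len(prev[0]):
                cost = sad(target, block(prev, x, y, n))
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best    # the motion vector, e.g. (-1, -2)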

Motion JPEG
JPEG compression or decompression that is applied real-time to video. Each field or frame of video is individually processed.

MPEG
MPEG stands for Moving Picture Experts Group. This is an ISO/IEC (International Organization for Standardization / International Electrotechnical Commission) body that is developing various compression algorithms. MPEG differs from JPEG in that MPEG takes advantage of the redundancy on a frame-to-frame basis of a motion video sequence, whereas JPEG does not.

MPEG-1
MPEG-1 (ISO/IEC 11172) was the first MPEG standard defining the compression format for real-time audio and video. The video resolution is typically 352 x 240 or 352 x 288, although higher resolutions are supported. The maximum bitrate is about 1.5 Mbps. MPEG-1 is used for the Video CD format.

MPEG-2
MPEG-2 (ISO/IEC 13818) extends the MPEG-1 standard to cover a wider range of applications. Higher video resolutions are supported to allow for HDTV applications, and both progressive and interlaced video are supported. MPEG-2 is used for the DVD-Video and SVCD formats, and also forms the basis for digital SDTV and HDTV.

MPEG-3
MPEG-3 was originally targeted for HDTV applications. This was incorporated into MPEG-2, so there is no MPEG-3 standard.

MPEG-4
MPEG-4 (ISO/IEC 14496) supports an object-based approach, where scenes are modeled as compositions of objects, both natural and synthetic, with which the user may interact. Visual objects in a scene can be described mathematically and given a position in a two- or three-dimensional space. Similarly, audio objects can be placed in a sound space. Thus, the video or audio object need only be defined once; the viewer can change his viewing position, and the calculations to update the audio and video are done locally. Classical "rectangular" video, as from a camera, is one of the visual objects supported. In addition, there is the ability to map images onto computer-generated shapes, and a text-to-speech interface.

Although well-known as a low bitrate, low resolution solution for wireless devices, MPEG-4 also supports HDTV resolutions and studio applications.

MPEG-4 offers bit rates of about one-half those used for MPEG-2 of similar video quality. "DVD quality" is achievable at about 1.5-2 Mbps, with "HDTV quality" at about 7 Mbps. Thus, a 6 MHz cable channel can support up to 24 SDTV channels of MPEG-4 content instead of 12 channels of MPEG-2 content.

H.26L, a next-generation video codec, is also being worked on, with the intent of it being part 10 of the MPEG-4 standard.




MPEG-7
MPEG-7 standardizes the description of multimedia material (referred to as metadata), such as still pictures, audio, and video, regardless of whether it is stored locally, in a remote database, or broadcast. Examples are finding a scene in a movie, finding a song in a database, or selecting a broadcast channel. A search for an image can start from a sketch or a general description. Music can be found using a "query by humming" format.

MTS
Multichannel Television Sound. A generic name for various stereo audio implementations, such as BTSC and Zweiton.

Multipass Encoding
True multipass encoding is currently available only for WM8 and MPEG-2. An encoder supporting multipass will, in a first pass, analyze the video stream to be encoded and write a log about everything it encounters. Let's assume we have a short clip that starts out in a dialog scene, with few cuts and a static camera, and then leads into a karate fight with lots of fast cuts and a lot of action (people flying through the air, kicking, punching, etc.). In regular CBR encoding, every second gets more or less the same bitrate (it's hard to stay 100% CBR, but that's a detail), whereas in multipass VBR mode the encoder distributes bitrate according to its knowledge of the video stream: the dialog part gets by with less bitrate, and the fighting scene is allotted more. The more passes, the more refined the bitrate distribution will be. In single-pass VBR, the encoder has to base its decisions on where to use how much bitrate solely on what it has already encoded.
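
The bitrate distribution idea boils down to something like this sketch (the per-second complexity numbers are hypothetical; a real encoder's pass-1 log is far richer):

def allocate(complexity_per_second, total_bits):
    # Pass 2: hand out the bit budget in proportion to the pass-1
    # complexity log, so action scenes get more bits than dialog.
    total = sum(complexity_per_second)
    return [total_bits * c / total for c in complexity_per_second]

print(allocate([1, 1, 4, 6], 12_000_000))
# -> [1000000.0, 1000000.0, 4000000.0, 6000000.0] bits for the four seconds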

NABTS
North American Broadcast Teletext Specification (EIA-516). This is also ITU-R BT.653 525-line system C teletext. However, the NABTS specification goes into much more detail.

NexTView
An electronic program guide (EPG) based on ETSI ETS 300 707.

NICAM 728
A technique of implementing digital stereo audio for PAL video using another audio subcarrier. The bit rate is 728 kbps. It is discussed in BS.707 and ETSI EN 300 163. NICAM 728 is also used to transmit non-audio digital data in China.

Noninterlaced
This is a method of scanning out a video display that is the total opposite of interlaced. All of the lines in the frame are scanned out sequentially, one right after the other. The term "field" does not apply in a noninterlaced system. Another term for a noninterlaced system is progressive scan.

NTSC
Never Twice the Same Color, Never The Same Color, or National Television Standards Committee, depending on who you‘re talking to. Technically, NTSC is just a color modulation scheme. To fully specify the color video signal, it should be referred to as (M) NTSC. "NTSC" is also commonly (though incorrectly) used to refer to any 525/59.94 or 525/60 video system. See also NTSC 4.43.

NTSC 4.43
This is a NTSC video signal that uses the PAL color subcarrier frequency (about 4.43 MHz). It was developed by Sony in the 1970s to more easily adapt European receivers to accept NTSC signals.

nVOD
Abbreviation for near-video-on-demand. See video-on-demand.

OIRT
Organisation Internationale de Radiodiffusion-Television.

Open Subtitles
See subtitles.

Oversampled VBI Data
See raw VBI data.

Overscan
When an image is displayed, it is "overscanned" if a small portion of the image extends beyond the edges of the screen. Overscan is common in TVs that use CRTs to allow for aging and variations in components, temperature and power supply.

PAL
PAL stands for Phase Alternation Line, Picture Always Lousy, or Perfect At Last depending on your viewpoint. Technically, PAL is just a color modulation scheme. To fully specify the color video signal it should be referred to as (B, D, G, H, I, M, N, or CN) PAL. (B, D, G, H, I) PAL is the color video standard used in Europe and many other countries. (M, N, CN) PAL is also used in a few places, but is not as popular. "PAL" is also commonly (though incorrectly) used to refer to any 625/50 video system. See also PAL 60.

PAL 60
This is a NTSC video signal that uses the PAL color subcarrier frequency (about 4.43 MHz) and PAL-type color modulation. It is a further adaptation of NTSC 4.43, modifying the color modulation in addition to changing the color subcarrier frequency. It was developed by JVC in the 1980s for use with their video disc players, hence the early name of "Disk-PAL".

There is a little-used variation, also called PAL 60, which is a PAL video signal that uses the NTSC color subcarrier frequency (about 3.58 MHz), and PAL-type color modulation.

PALplus
PALplus is a 16:9 aspect ratio version of PAL, designed to be transmitted using normal PAL systems. 16:9 TVs without the PALplus decoder, and standard TVs, show a standard picture. It is defined by BT.1197 and ETSI ETS 300 731.

PDC
See program delivery control.

Pedestal
Pedestal is an offset used to separate the black level from the blanking level by a small amount. When a video system doesn‘t use a pedestal, the black and blanking levels are the same. (M) NTSC uses a pedestal, (B, D, G, H, I) PAL does not. (M) NTSC-J used in Japan also does not use a pedestal.

Phase Adjust
This is a term used to describe a method of adjusting the hue in a NTSC video signal. The phase of the color subcarrier is moved, or adjusted, relative to the color burst. PAL and SECAM systems do not usually have a phase (or hue) adjust control.

Pixel
A pixel, which is short for picture element, is the smallest sample that makes up a scan line. For example, when the horizontal resolution is defined as 640 pixels, that means that there are 640 individual locations, or samples, that make up the visible portion of each horizontal scan line. Pixels may be square or rectangular.

Pixel Clock
The pixel clock is used to divide the horizontal line of video into samples. The pixel clock has to be stable (a very small amount of jitter) relative to the video or the image will not be stored correctly. The higher the frequency of the pixel clock, the more samples per line there are.

Pixel Drop Out
This can be a real troublemaker, since it can cause artifacts. In some instances, a pixel drop out looks like black spots on the screen, either stationary or moving around. Several things can cause pixel drop out, such as the ADC not digitizing the video correctly. Also, the timing between the ADC and the frame buffer might not be correct, causing the wrong number to be stored in memory. For that matter, the timing anywhere in the video stream might cause a pixel drop out.

Primary Colors
A set of colors that can be combined to produce any desired set of intermediate colors, within a limitation called the "gamut". The primary colors for color television are red, green, and blue. The exact red, green, and blue colors used are dependent on the television standard.

Program Delivery Control
Information sent during the vertical blanking interval using teletext to control VCRs in Europe. The specification is ETSI ETS 300 231.

Progressive Scan
See noninterlaced.

Pseudo Color
Pseudo color is a term used to describe a technique that applies color, or shows color, where it does not really exist. We are all familiar with the satellite photos that show temperature differences across a continent or the multicolored cloud motion sequences on the nightly weather report. These are real-world examples of pseudo color. The color does not really exist. A computer uses a lookup table memory to add the color so that information, such as temperature or cloud height, is viewable.

Px64
This is basically the same as H.261. The term is starting to fade away since H.261 is used in applications other than ISDN video conferencing.

QAM
See quadrature amplitude modulation.

QCIF
Quarter Common Interface Format. This video format was developed to allow the implementation of cheaper video phones. The QCIF format has a resolution of 176 x 144 active pixels and a refresh rate of 29.97 frames per second.

QSIF
Quarter Standard Interface Format. The computer industry, which uses square pixels, has defined QSIF to be 160 x 120 active pixels, with a refresh rate of whatever the computer is capable of supporting.

Quad Chroma
Quad chroma refers to a technique where the sample clock is four times the frequency of the color burst. For NTSC this means that the sample clock is about 14.32 MHz (4 x 3.579545 MHz), while for PAL the sample clock is about 17.73 MHz (4 x 4.43361875 MHz). The reason these are popular sample clock frequencies is that, depending on the method chosen, they make the chrominance (color) decoding and encoding easier.

Quadrature Amplitude Modulation
A method of encoding digital data onto a carrier for RF transmission. QAM is typically used for cable transmission of digital SDTV and HDTV signals. DVB-C supports 16-QAM, 32-QAM, 64-QAM, 128-QAM, and 256-QAM, although receivers need only support up to 64-QAM.

Quadrature Modulation
The modulation of two carrier components, which are 90 degrees apart in phase.

Quantization
The process of converting a continuous analog signal into a set of discrete levels (digitizing).

Quantization Noise
This is the inherent uncertainty introduced during quantization since only discrete, rather than continuous, levels are generated. Also called quantization distortion.

Raster
Essentially, a raster is the series of scan lines that make up a picture. You may from time to time hear the term raster line -- it‘s the same as scan line. All of the scan lines that make up a frame of video form a raster.

Raw VBI Data
A technique where VBI data (such as teletext and captioning data) is sampled by a fast sample clock (e.g., 27 MHz) and output. This technique allows software decoding of the VBI data.

RC Time Code
Rewritable time code, used in consumer video products.

Rectangular Pixels
Pixels that are not "square pixels" are "rectangular pixels".

Real-Time Control Protocol
See RTCP.

Real-Time Transport Protocol
See RTP.

Real-Time Streaming Protocol
See RTSP.

Residual Subcarrier
This is the amount of color subcarrier information present in white, gray, or black areas of a composite color video signal (ideally, there is none present). The number usually appears as -n dB. The larger "n" is, the better.

Resolution
This is the basic measurement of how much information is visible for an image. It is usually described as "h" x "v". The "h" is the horizontal resolution (across the display) and the "v" is the vertical resolution (down the display). The higher the numbers, the better, since that means there is more detail to see. If only one number is specified, it is the horizontal resolution.

Displays specify the maximum resolution they can handle, determined by the display technology and the electronics used. The actual resolution will be the resolution of either the source or the display, whichever is lower.

Vertical resolution is the number of white-to-black and black-to-white transitions that can be seen from the top to the bottom of the picture. The maximum number is the number of active scan lines used by the image. The actual vertical resolution may be less due to processing, interlacing, or overscanning, or may be limited by the source.

Horizontal resolution is the number of white-to-black and black-to-white transitions that can be seen from the left to the right of the picture. For digital displays, the maximum number is the number of active pixels used by a scan line. For both analog and digital displays, the actual horizontal resolution may be less due to processing or overscanning, or may be limited by the source.

Resource Reservation Protocol
See RSVP.

Retrace
Retrace is what the electron beam does when it gets to the right-hand edge of the CRT display to get back to the left-hand edge. Retrace happens during the horizontal blanking time.

RGB
Abbreviation for red, green, blue.

RS-170, RS-170A
RS-170 is the United States standard that was used for black-and-white TV, and defines voltage levels, blanking times, the width of the sync pulses, and so forth. The specification spells out everything required for a receiver to display a monochrome picture. Now, SMPTE 170M is essentially the same specification, modified for color TV by adding the color components. They modified RS-170 just a tiny little bit so that color could be added (RS-170A), with the final result being SMPTE 170M for NTSC. This tiny little change was so small that the existing black-and-white TVs didn‘t even notice it.

RS-343
RS-343 does the same thing as RS-170, defining a specification for transferring analog video, but the difference is that RS-343 is for high-resolution computer graphics analog video, while RS-170 is for TV-resolution NTSC analog video.

RSDL
RSDL stands for Reverse Spiral Dual Layer. It is a storage method that uses two layers of information on one side of a DVD. For movies that are longer than can be recorded on one layer, playback continues on the second layer, which is read in the reverse spiral direction, so there is no need to seek back to the start of the disc.

RSVP
RSVP (Resource Reservation Protocol) is a control protocol that allows a receiver to request a specific quality of service level over an IP network. Real-time applications, such as streaming video, use RSVP to reserve necessary resources at routers along the transmission paths so that the requested bandwidth can be available when the transmission actually occurs.

RTCP
RTCP (Real-Time Control Protocol) is a control protocol designed to work in conjunction with RTP. During a RTP session, participants periodically send RTCP packets to convey status on quality of service and membership management. RTCP also uses RSVP to reserve resources to guarantee a given quality of service.

RTP
RTP (Real-Time Transport Protocol) is a packet format and protocol for the transport of real-time audio and video data over an IP network. The data may be any file format, including MPEG-2, MPEG-4, ASF, QuickTime, etc. Implementing time reconstruction, loss detection, security and content identification, it also supports multicasting (one source to many receivers) and unicasting (one source to one receiver) of real-time audio and video. One-way transport (such as video-on-demand) as well as interactive services (such as Internet telephony) are supported. RTP is designed to work in conjunction with RTCP.

RTSP
RTSP (Real-Time Streaming Protocol) is a client-server protocol to enable controlled delivery of streaming audio and video over an IP network. It provides "VCR-style" remote control capabilities such as play, pause, fast forward, and reverse. The actual data delivery is done using RTP.

Run Length Coding
Run length coding is a type of data compression. Let's say that this page is wide enough to hold a line of 80 characters. Now, imagine a line that is almost blank except for a few words. It's 80 characters long, but it's just about all blanks -- let's say 50 blanks between the words "coding" and "medium". These 50 blanks could be stored as 50 individual codes, but that would take up 50 bytes of storage. An alternative would be to define a special code that said a string of blanks is coming and the next number is the amount of blanks in the string. So, using our example, we would need only 2 bytes to store the string of 50 blanks, the first special code byte followed by the number 50. We compressed the data; 50 bytes down to 2. This is a compression ratio of 25:1. Not bad, except that we only compressed one line out of the entire document, so we should expect that the total compression ratio would be much less.

Run length coding all by itself as applied to images is not as efficient as using a DCT for compression, since long runs of the same "number" rarely exist in real-world images. The only advantage of run length coding over the DCT is that it is easier to implement. Even though run length coding by itself is not efficient for compressing images, it is still used as part of the JPEG, MPEG, H.261, and H.263 compression schemes.
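
A generic Python sketch of the scheme described above. The three-byte run format and the 0xFF marker are arbitrary choices for illustration (and a real format would also have to escape the marker when it occurs in the data):

def rle_encode(data, marker=0xFF):
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        if j - i >= 3:
            out += [marker, j - i, data[i]]   # (marker, count, value)
        else:
            out += data[i:j]                  # short runs pass through
        i = j
    return out

print(rle_encode([0x20] * 50))   # 50 blanks -> [255, 50, 32]: 3 bytes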

R‘-Y‘
In video, the red-minus-luma signal, also called a color difference signal. When added to the luma (Y‘) signal, it produces the red video signal.

SABC
South African Broadcasting Corporation.

Sample
To obtain values of a signal at periodic intervals. Also the value of a signal at a given moment in time.

Sample and Hold
A circuit that samples a signal and holds the value until the next sample is taken.

Sample Rate
Sample rate is how often a sample of a signal is taken. The sample rate is determined by the sample clock.

SAP
See secondary audio program.

Saturation
Saturation is the amount of color present. For example, a lightly saturated red looks pink, while a fully saturated red looks like the color of a red crayon. Saturation does not mean the brightness of the color, just how much "pigment" is used to make the color. The less "pigment", the less saturated the color is, effectively adding white to the pure color.

Scaling
Scaling is the act of changing the resolution of an image. For example, scaling a 640 x 480 image by one-half results in a 320 x 240 image. Scaling by 2x results in an image that is 1280 x 960. There are many different methods for image scaling, and some "look" better than others. In general, though, the better the algorithm "looks", the more expensive it is to implement.

Scan Line
A scan line is an individual line across the display. It takes 525 of these scan lines to make up a NTSC TV picture and 625 scan lines to make up a PAL TV picture.

Scan Velocity Modulation
See velocity scan modulation.

SCART
Syndicat des Constructeurs d‘Appareils Radio Recepteurs et Televiseurs. This is a 21-pin connector supported by many consumer video components in Europe. It allows mono or stereo audio and composite, s-video, or RGB video to be transmitted between equipment.

SDI
Serial Digital Interface. Another name for the 270 Mbps or 360 Mbps serial interface defined by BT.656. It is used primarily on professional and studio video equipment.

SDTI
Serial Data Transport Interface, defined by SMPTE 305M.

SDTV
Short for Standard Definition Television. SDTV is a television that displays less active vertical resolution than EDTV. No aspect ratio is specified.

For the ATSC system, typical SDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 29.97p, 30p | 59.94i, 60i | 59.94p, 60p
640 x 480         | 4:3          |              |             |      x      |
720 x 360         | 16:9         |              |             |      x      |
720 x 480         | 4:3          |              |             |      x      |

For the DVB system, typical SDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 25p | 29.97p, 30p | 50i | 50p | 59.94i, 60i | 59.94p, 60p
352 x 288         | 4:3          |      x       |  x  |             |  x  |  x  |             |
720 x 432         | 16:9         |              |     |             |  x  |     |             |
352 x 576         | 4:3          |              |     |             |  x  |     |             |
480 x 576         | 4:3          |              |     |             |  x  |     |             |
544 x 576         | 4:3          |              |     |             |  x  |     |             |
720 x 576         | 4:3          |              |     |             |  x  |     |             |
352 x 240         | 4:3          |      x       |     |      x      |     |     |      x      |      x
720 x 360         | 16:9         |              |     |             |     |     |      x      |
352 x 480         | 4:3          |              |     |             |     |     |      x      |
480 x 480         | 4:3          |              |     |             |     |     |      x      |
544 x 480         | 4:3          |              |     |             |     |     |      x      |
640 x 480         | 4:3          |              |     |             |     |     |      x      |
720 x 480         | 4:3          |              |     |             |     |     |      x      |

Other common SDTV (luminance) resolutions and refresh rates are:

Active Resolution | Aspect Ratio | 23.976p, 24p | 25p | 29.97p, 30p | 50i | 50p | 59.94i, 60i | 59.94p, 60p
960 x 480         | 16:9         |              |     |             |     |     |      x      |
960 x 576         | 16:9         |              |     |             |  x  |     |             |
p = progressive; i = interlaced.

SECAM
This is another color video format similar to PAL. The major differences between the two are that in SECAM the chroma is FM modulated and the R‘-Y‘ and B‘-Y‘ signals are transmitted line sequentially. SECAM stands for Sequentiel Couleur Avec Memoire or Sequential Color with Memory.

Secondary Audio Program (SAP)
Generally used to transmit audio in a second language.

Serration Pulses
These are pulses that occur during the vertical sync interval of NTSC, PAL, and SECAM, at twice the normal horizontal scan rate. They exist to ensure correct 2:1 interlacing in early televisions and to eliminate DC offset buildup.

Setup
Setup is the same thing as pedestal.

SIF
Standard (or Source) Input Format. This video format was developed to allow the storage and transmission of digital video. The 625/50 SIF format has a resolution of 352 x 288 active pixels and a refresh rate of 25 frames per second. The 525/60 SIF format has a resolution of 352 x 240 active pixels and a refresh rate of 30 frames per second. Note that MPEG-1 allows resolutions up to 4095 x 4095 active pixels; however, there is a "constrained subset" of parameters defined as SIF. The computer industry, which uses square pixels, has defined SIF to be 320 x 240 active pixels, with a refresh rate of whatever the computer is capable of supporting.

Signal-to-Noise Ratio (SNR)
Signal-to-noise ratio is the magnitude of the signal divided by the amount of unwanted stuff that is interfering with the signal (the noise). SNR is usually described in decibels, or "dB", for short; the bigger the number, the better looking the picture.
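
As a sketch, the dB figure is just a logarithm of the ratio of amplitudes (for video, often peak-to-peak signal over RMS noise; the millivolt values below are only an example):

import math

def snr_db(signal_amplitude, noise_amplitude):
    # 20 log10 of the amplitude ratio gives the SNR in decibels.
    return 20 * math.log10(signal_amplitude / noise_amplitude)

print(snr_db(700, 7))    # 700 mV of video over 7 mV of noise -> 40.0 dB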

Silent Radio
Silent Radio is a service that feeds data that is often seen in hotels and nightclubs. It's usually a large red sign that shows current news, events, scores, etc. It is present on NTSC lines 10-11 and 273-274, and uses encoding similar to EIA-608.

Sliced VBI Data
A technique where a VBI decoder samples the VBI data (such as teletext and captioning data), locks to the timing information, and converts it to binary 0‘s and 1‘s. DC offsets, amplitude variations, and ghosting must be compensated for by the VBI decoder to accurately recover the data.

SMPTE 12M
Defines the longitudinal (LTC) and vertical interval (VITC) timecode for NTSC and PAL video systems. LTC requires an entire field time to store timecode information, using a separate track. VITC uses one scan line each field during the vertical blanking interval.

SMPTE 125M
720 x 480 pro-video interlaced standard (29.97 Hz). Covers the digital representation and the digital parallel interface. Also see BT.601 and BT.656.

SMPTE 170M
NTSC video specification for the United States. See RS-170A and BT.470.

SMPTE 240M
1920 x 1035 pro-video interlaced standard (29.97 or 30 Hz). Covers the analog RGB and YPbPr representation. The digital parallel interface is defined by SMPTE 260M. The digital serial interface is defined by SMPTE 292M.

SMPTE 244M
768 x 486 pro-video interlaced standard (29.97 Hz). Covers the digital representation (composite NTSC video sampled at 4x Fsc) and the digital parallel interface. The digital serial interface is defined by SMPTE 259M.

SMPTE 253M
Analog RGB video interface specification for pro-video SDTV systems.

SMPTE 259M
Pro-video serial digital interface for SMPTE 244M.

SMPTE 260M
Digital representation and parallel interface for SMPTE 240M video.

SMPTE 266M
Defines the digital vertical interval timecode (DVITC). Also see BT.1366.

SMPTE 267M
960 x 480 pro-video interlaced standard (29.97 Hz). Covers the digital representation and the digital parallel interface. Also see BT.601 and BT.1302.

SMPTE 272M
Formatting AES/EBU digital audio and auxiliary data into the digital blanking intervals. Also see BT.1305.

SMPTE 274M
1920 x 1080 pro-video interlaced and progressive standards. Covers the digital representation, the analog RGB and YPbPr interfaces, and the digital parallel interface. The digital serial interface is defined by SMPTE 292M.

SMPTE 276M
Transmission of AES/EBU digital audio and auxiliary data over coaxial cable.

SMPTE 291M
Ancillary data packet and space formatting for pro-video digital interfaces. Also see BT.1364.

SMPTE 292M
1.485 Gbps pro-video HDTV serial interfaces.

SMPTE 293M
720 x 480 pro-video progressive standards (59.94 Hz). Covers the digital representation, the analog RGB and YPbPr interfaces, and the digital parallel interface. The digital serial interface is defined by SMPTE 294M. Also see BT.1358 and BT.1362.

SMPTE 294M
Pro-video serial digital interface for SMPTE 293M.

SMPTE 296M
1280 x 720 pro-video progressive standards. Covers the digital representation and the analog RGB and YPbPr interfaces. The digital parallel interface uses SMPTE 274M. The digital serial interface is defined by SMPTE 292M.

SMPTE 299M
24-bit digital audio format for pro-video HDTV serial interfaces. Also see BT.1365.

SMPTE 305M
Serial data transport interface (SDTI). This is a 270 or 360 Mbps serial interface based on BT.656 that can be used to transfer almost any type of digital data, including MPEG-2 program streams, MPEG-2 transport streams, DV bit streams, etc. You cannot exchange material between devices that use different data types. Material that is created in one data type can only be transported to other devices that support the same data type. There are separate map documents that format each data type into the 305M transport.

SMPTE 308M
MPEG-2 4:2:2 profile at high level.

SMPTE 314M
Data structure for DV-based audio, data and compressed video at 25 and 50 Mbps. Also see IEC 61834.

SMPTE 322M
Data stream format for the exchange of DV-based audio, data and compressed video over a Serial Data Transport Interface (SDTI or SMPTE 305M).

SMPTE 344M
Defines a 540 Mbps serial digital interface for pro-video applications.

SMPTE 348M
High data-rate serial data transport interface (HD-SDTI). This is a 1.485 Gbps serial interface based on SMPTE 292M that can be used to transfer almost any type of digital data, including MPEG-2 program streams, MPEG-2 transport streams, DV bit streams, etc. You cannot exchange material between devices that use different data types. Material that is created in one data type can only be transported to other devices that support the same data type. There are separate map documents that format each data type into the 348M transport.

SMPTE RP160
Analog RGB and YPbPr video interface specification for pro-video HDTV systems.

SPDIF
Short for Sony/Philips Digital InterFace. This is a consumer interface used to transfer digital audio. A serial, self-clocking scheme is used, based on a coax or fiber interconnect. The audio samples may be 16-24 bits each. 16 different sampling rates are supported, with 32, 44.1, and 48 kHz being the most common. IEC 60958 now fully defines this interface for consumer and professional applications.

Split Sync Scrambling
Split sync is a video scrambling technique, usually used with either horizontal blanking inversion, active video inversion, or both. In split sync, the horizontal sync pulse is "split", with the second half of the pulse at +100 IRE instead of the standard -40 IRE. Depending on the scrambling mode, either the entire horizontal blanking interval is inverted about the +30 IRE axis, the active video is inverted about the +30 IRE axis, both are inverted, or neither is inverted. By splitting the horizontal sync pulse, a reference of both -40 IRE and +100 IRE is available to the descrambler.

Since a portion of the horizontal sync is still at -40 IRE, some sync separators may still lock on the shortened horizontal sync pulses. However, the timing circuits that look for color burst a fixed interval after the end of horizontal sync may be confused. In addition, if the active video is inverted, some video information may fall below 0 IRE, possibly confusing sync detector circuits.

The burst is always present at the correct frequency and timing; however, the phase is shifted 180 degrees when the horizontal blanking interval is inverted.

Square Pixels
A "square pixel" is one that has the same number of active samples both horizontally and vertically, for a 1:1 aspect ratio. Computers and HDTV use square pixels.

Using 480 active scan lines for NTSC, if the display had a 1:1 aspect ratio, square pixels would mean there would be 480 active samples per line. Since the display has a 4:3 aspect ratio, the number of active samples is (480)*(4/3) or 640. To get 640 active samples per line, you need a 12.27 MHz sample clock.

Using 576 active scan lines for PAL, if the display had a 1:1 aspect ratio, square pixels would mean there would be 576 active samples per line. Since the display has a 4:3 aspect ratio, the number of active samples is (576)*(4/3) or 768. To get 768 active samples per line, you need a 14.75 MHz sample clock.
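
Those clock rates can be sanity-checked against the line rates, using the commonly quoted square-pixel totals of 780 samples per line for NTSC and 944 for PAL (an assumption here, since only the active counts are given above):

print(780 * 15734.26)   # ~12,272,723 Hz: the 12.27 MHz NTSC sample clock
print(944 * 15625.0)    # 14,750,000 Hz: the 14.75 MHz PAL sample clock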

Standard Definition Television
See SDTV.

Starsight
An electronic program guide that you subscribe to. It allows you to sort the guide by your order of preference and delete stations you never watch. It's a national service that is regionalized. The decoders in Houston only download data for Houston. Move to Dallas and you only get Dallas. It is present on NTSC lines 14 and 277, and uses encoding similar to EIA-608.

Streaming Video
Compressed audio and video that is transmitted over the Internet or other network in real time. Typical compression techniques are MPEG-2, MPEG-4, Microsoft WMT, RealNetworks, and Apple‘s QuickTime. It usually offers "VCR-style" remote control capabilities such as play, pause, fast forward, and reverse.

Subcarrier
A secondary signal containing additional information that is added to a main signal.

Subsampled
Subsampled means that a signal has been sampled at a lower rate than some other signal in the system. A prime example of this is the 4:2:2 Y‘CbCr color space used in ITU-R BT.601. For every two luma (Y‘) samples, only one Cb and Cr sample is present. This means that the Cb and Cr signals are subsampled.
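
A small illustration of 4:2:2, with one Cb/Cr pair shared by each two horizontally adjacent luma samples (the sample values are made up):

luma = [16, 35, 80, 120, 200, 180, 90, 40]   # 8 Y' samples
cb   = [128, 110, 140, 125]                  # 4 Cb samples
cr   = [128, 150, 120, 130]                  # 4 Cr samples
# Luma sample i is paired with cb[i // 2] and cr[i // 2], so the
# chroma signals carry half as many samples as the luma signal.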

Subtitles
Text that is added below or over a picture that usually reflects what is being said, possibly in another language. Open subtitles are transmitted as video that already has the subtitles present. Closed subtitles are transmitted during the VBI, and rely on the TV to decode them and position them below or over the picture. Closed captioning is a form of subtitling. Subtitling for DVB is specified in ETSI ETS 300 743.

Super Black
A keying signal that is embedded within the composite video signal as a level between black and sync. It is usually used to improve luma self-keying because the video signal contains black, making a good luma self-key hard to implement. When a downstream keyer detects the super black level, it inserts the second composite video signal.

Super VideoCD (Super VCD, SVCD)
Next-generation VideoCD, defined by the China National Technical Committee of Standards on Recording, holding 35-70 minutes of digital audio and video information. MPEG-2 video is used, with a resolution of 480 x 480 (29.97 Hz frame rate) or 480 x 576 (25 Hz frame rate). Audio uses MPEG-1 layer 2 or MPEG-2 at a bit rate of 32-384 kbps, and supports four mono, two stereo, or 5.1 channels. Subtitles use overlays rather than subpictures (DVD-Video) or being encoded as video (VideoCD). Variable bit-rate encoding is used, with a maximum bit rate of 2.6 Mbps. IEC 62107 defines the Super VideoCD standard.

XSVCD, although not an industry standard, increases the video resolution and bit rate to improve the video quality over SVCD. MPEG-2 video is still used, with a resolution of 720 x 480 (29.97 Hz frame rate) or 720 x 576 (25 Hz frame rate). Variable bit-rate encoding is still used, with a maximum bit rate of 9.8 Mbps.

Superbit DVD
See DVD-Video.

S-VHS
S-VHS is an enhancement to regular VHS video tape decks. S-VHS provides better resolution and less noise than VHS. S-VHS video tape decks support s-video inputs and outputs, although this is not required. It does, however, improve the quality by not having to separate and then merge the luma and chroma signals.

S-Video
Separate video, also called Y/C video. Separate luma (Y‘) and chroma (C) video signals are used, rather than a single composite video signal. By simply adding together the Y‘ and C signals, you generate a composite video signal.

A DC offset of +2.3 V may be present on the C signal when a letterbox picture format is present. A DC offset of +5 V may be present to indicate when a 16:9 anamorphic picture format is present. A standard 4:3 receiver ignores all DC offsets, thus displaying a typical letterboxed picture.

SVM
See velocity scan modulation.

Sync
Sync is a fundamental, you gotta have it, piece of information for displaying any type of video. Essentially, the sync signal tells the display where to put the picture. The horizontal sync, or HSYNC for short, tells the display where to put the picture in the left-to-right dimension. The vertical sync, or VSYNC for short, tells the display where to put the picture in the top-to-bottom dimension.

Analog SDTV and EDTV signals use a bi-level sync, where the sync level is a known value below the blanking level. Analog HDTV signals use a tri-level sync, where the sync levels are known values above and below the blanking level.

The reason analog HDTV signals use a tri-level sync signal is timing accuracy. The horizontal timing reference point for a bi-level sync signal is defined as the 50% point of the leading edge of the horizontal sync pulse. In order to ascertain this point precisely, it is necessary to determine both the blanking level and sync-tip level and determine the mid-point value. If the signal is in any way distorted, this will reduce the timing accuracy.

With a tri-level sync signal, the timing reference point is the rising edge of the sync signal as it passes through the blanking level. This point is much easier to accurately determine, and can be implemented relatively easily. It is also more immune to signal distortion.

Sync Generator
A sync generator is a circuit that provides sync signals. A sync generator may have genlock capability.

Sync Noise Gate
A sync noise gate is used to define an area within the video waveform where the video decoder is to look for the sync pulse. Anything outside of this defined window will be rejected. The main purpose of the sync noise gate is to make sure that the output of the video decoder is nice, clean, and correct.

Sync Stripper
A video signal contains video information, which is the picture to be displayed, and timing (sync) information that tells the receiver where to put this video information on the display. A sync stripper pulls out the sync information from the video signal and throws the rest away.

Synchronous
Refers to two or more events that happen in a system or circuit at the same time.

SVCD
See Super Video CD.

TDF
Telediffusion de France.

Teletext
A method of transmitting data with a video signal. ITU-R BT.653 lists the major teletext systems used around the world, while ETSI ETS 300 706 defines in detail the teletext standard for PAL. North American Broadcast Teletext Specification (NABTS) is 525-line system C.

For digital transmissions such as HDTV and SDTV, the teletext characters are multiplexed as a separate stream along with the video and audio data. It is common practice to actually embed this stream in the MPEG video bitstream itself, rather than at the transport layer. Unfortunately, there is no widespread standard for this teletext stream -- each system (DSS, DVB, ATSC, DVD) has its own solution.

The practical place in MPEG to stick teletext data is in the user_data field, which can be placed at various frequencies within the video stream. For DVD, it is the group_of_pictures header, which usually precedes intra pictures (this happens about twice a second). For ATSC broadcasts, the data is inserted in the user_data field of individual picture headers (up to 60 times/sec).

Tessellated Sync
This is what the Europeans call serrated sync. See serration pulses and sync.

Timebase Corrector
Certain video sources have their sync signals screwed up. The most common of these sources is the VCR. A timebase corrector "fixes" a video signal that has bad sync timing.

Tri-Level Sync
A sync signal that has three levels, and is commonly used for analog HDTV signals. See the definition for sync.

True Color
True color means that each sample of an image is individually represented using three color components, such as RGB or Y‘CbCr.

Underscan
When an image is displayed, it is "underscanned" if all of the image, including the top, bottom, and side edges, is visible on the display. Underscan is common in computer displays.

Uplink
The carrier used by Earth stations to transmit information to a satellite.

V chip
See EIA-608.

Variable Bit Rate
Variable bit rate (VBR) means that a bitstream (compressed or uncompressed) has a changing number of bits each second. Simple scenes can be assigned a low bit rate, with complex scenes using a higher bit rate. This enables maintaining the audio and video quality at a more consistent level.

VBI
See vertical blanking interval.

VBR
Abbreviation for variable bit rate.

Velocity Scan Modulation
Commonly used in TVs to increase the apparent sharpness of a picture. At horizontal dark-to-light transitions, the beam scanning speed is momentarily increased approaching the transition, making the display relatively darker just before the transition. Upon passing into the lighter area, the beam speed is momentarily decreased, making the display relatively brighter just after the transition. The reverse occurs in passing from light to dark.

Vertical Blanking Interval
During the vertical blanking interval, the video signal is at the blanking level so as not to display the electron beam when it sweeps back from the bottom to the top of the CRT screen.

Vertical Interval Timecode
Timecode information is stored on a scan line during each vertical blanking interval.

Vertical Resolution
See resolution.

Vertical Scan Rate
For noninterlaced video, this is the same as the frame rate. For interlaced video, the display is scanned vertically once per field, so the vertical scan rate is the field rate; the frame rate is one-half the field rate.

Vertical Sync
This is the portion of the video signal that tells the decoder where the top of the picture is.

Vestigial Sideband
A method of encoding digital data onto a carrier for RF transmission. 8-VSB is used for over-the-air broadcasting of ATSC HDTV in the USA.

Video Carrier
A specific frequency that is modulated with video data before being mixed with the audio data and transmitted.

Video Interface Port
A VESA specification designed to simplify interfacing video ICs together. One portion is a digital video interface (based on BT.656); a second portion is a host processor interface.

Video Mixing
Video mixing is taking two independent video sources (they must be genlocked) and merging them together. See alpha mix.

Video Modulation
Converting a baseband video signal to an RF signal.

Video Module Interface
A digital video interface designed to simplify interfacing video ICs together. It is being replaced by VIP.

Video-on-Demand
Video-on-demand, or VOD, allows a user to select a program to view at their convenience, with playback starting almost immediately. When used over the Internet or another network, it is commonly called "streaming video". For broadcast, satellite, and cable networks, it is commonly called "pay-per-view" and is usually confined to specific start times; for this reason, it may also be referred to as "near video-on-demand" or nVOD.

Video Program System
VPS is used in some countries instead of PDC to control VCRs. The data format is the same as for PDC, except that it is transmitted on a dedicated line during the vertical blanking interval, usually line 16.

VideoCD
Compact discs that hold up to about an hour of digital audio and video information. MPEG-1 video is used, with a resolution of 352 x 240 (29.97 Hz frame rate) or 352 x 288 (25 Hz frame rate). Audio uses MPEG-1 layer 2 at a fixed bit rate of 224 kbps, and supports two mono channels or one stereo channel (with optional Dolby Pro Logic). Fixed bit-rate encoding is used, with a video bit rate of 1.15 Mbps. The next generation, defined for the Chinese market, is Super VideoCD.

XVCD, although not an industry standard, increases the video resolution and bit rate to improve the video quality over VCD. MPEG-1 video is still used, with a resolution of up to 720 x 480 (29.97 Hz frame rate) or 720 x 576 (25 Hz frame rate). Fixed bit-rate encoding is still used, with a bit rate of 3.5 Mbps.

VIP
See video interface port.

VITC
See vertical interval timecode.

VMI
See video module interface.

VOD
See video-on-demand.

VPS
See video program system.

VSB
See vestigial sideband.

VSM
See velocity scan modulation.

VSYNC
See vertical sync.

White Level
This level defines what white is for the particular video system.

Wide Screen Signaling
WSS may be used on (B, D, G, H, I) PAL line 23 and (M) NTSC lines 20 and 283 to specify the aspect ratio of the program and other information. 16:9 TVs may use this information to display the program in the correct aspect ratio. ITU-R BT.1119 and ETSI EN 300 294 specify the WSS signal for PAL and NTSC systems; EIAJ CPR-1204 and IEC 61880 specify another WSS signal for NTSC systems.

World System Teletext
BT.653 525-line and 625-line system B teletext.

WSS
See wide screen signaling.

WST
See world system teletext.

XSVCD
Abbreviation for eXtended Super VideoCD. See Super VideoCD.

Y/C Video
See s-video.

Y/C Separator
A Y/C separator is what's used in a video decoder to separate the luma and chroma in an NTSC or PAL system; this is the first thing any video decoder must do. The composite video signal is fed to a Y/C separator so that the chroma can then be decoded further.

Y‘CbCr, YCbCr
Y‘CbCr is the color space originally defined by BT.601, and now used for all digital component video formats. Y‘ is the luma component and the Cb and Cr components are color difference signals. The technically correct notation is Y‘Cb‘Cr‘ since all three components are derived from R‘G‘B‘. Many people use the YCbCr notation rather than Y‘CbCr or Y‘Cb‘Cr‘.

4:4:4 Y‘CbCr means that for every Y‘ sample, there is one sample each of Cb and Cr.

4:2:2 Y‘CbCr means that for every two horizontal Y‘ samples, there is one sample each of Cb and Cr.

4:1:1 Y‘CbCr means that for every four horizontal Y‘ samples, there is one sample each of Cb and Cr.

4:2:0 Y‘CbCr means that for every block of 2 x 2 Y‘ samples, there is one sample each of Cb and Cr. There are three variations of 4:2:0 YCbCr, with the difference being the position of Cb and Cr sampling relative to Y.

Note that the coefficients to convert R‘G‘B‘ to Y‘CbCr are different for SDTV and HDTV applications.
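
As a minimal sketch, the function below applies the common 8-bit SDTV (BT.601) form of the conversion, producing Y' in the nominal 16-235 range and Cb and Cr in the nominal 16-240 range; the function and helper names are illustrative, and HDTV would use the BT.709 coefficients instead.

/* 8-bit R'G'B' (0-255) to BT.601 Y'CbCr. */
#include <stdio.h>
#include <stdint.h>

static uint8_t clamp8(double v)
{
    return v < 0.0 ? 0 : v > 255.0 ? 255 : (uint8_t)(v + 0.5);
}

static void rgb_to_ycbcr601(uint8_t r, uint8_t g, uint8_t b,
                            uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = clamp8( 0.257 * r + 0.504 * g + 0.098 * b +  16.0);
    *cb = clamp8(-0.148 * r - 0.291 * g + 0.439 * b + 128.0);
    *cr = clamp8( 0.439 * r - 0.368 * g - 0.071 * b + 128.0);
}

int main(void)
{
    uint8_t y, cb, cr;
    rgb_to_ycbcr601(255, 255, 255, &y, &cb, &cr);
    printf("white: Y'=%u Cb=%u Cr=%u\n", y, cb, cr);  /* 235, 128, 128 */
    return 0;
}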

Y‘IQ, YIQ
Y‘IQ is a color space optionally used by the NTSC video system. The Y‘ component is the black-and-white portion of the image. The I and Q parts are the color difference components; these are effectively nothing more than color placed over the black and white, or luma, component. Many people use the YIQ notation rather than Y‘IQ or Y‘I‘Q‘. The technically correct notation is Y‘I‘Q‘ since all three components are derived from R‘G‘B‘.

Y‘PbPr, YPbPr
Y‘PbPr is a scaled version of the YUV color space, with specific levels and timing signals, designed to interface equipment together. Consumer video standards are defined by EIA-770; the professional video standards are defined by numerous SMPTE standards. VBI data formats for EIA-770 are defined by EIA-805. Many people use the YPbPr notation rather than Y‘PbPr or Y‘Pb‘Pr‘. The technically correct notation is Y‘Pb‘Pr‘ since all three components are derived from R‘G‘B‘.

Y‘UV, YUV
Y‘UV is the color space used by the NTSC and PAL video systems. As with the Y‘IQ color space, the Y‘ is the luma component while the U and V are the color difference components. Many people use the Y‘UV notation when they actually mean Y‘CbCr data. Most use the YUV notation rather than Y‘UV or Y‘U‘V‘. The technically correct notation is Y‘U‘V‘ since all three components are derived from R‘G‘B‘.

YUV is also the name for some component analog interfaces on consumer equipment. Some manufacturers incorrectly label it YCbCr. THX certification will require it to be labeled YPbPr.

YUV9
Intel‘s 4:1:0 YCbCr format. The picture is divided into blocks, with each block comprising 4 x 4 samples. For each block, sixteen 8-bit values of Y, one 8-bit value of Cb, and one 8-bit value of Cr are assigned. The result is an average of 9 bits per pixel.

YUV12
Intel‘s notation for MPEG-1 4:2:0 YCbCr stored in memory in a planar format. The picture is divided into blocks, with each block comprising 2 x 2 samples. For each block, four 8-bit values of Y, one 8-bit value of Cb, and one 8-bit value of Cr are assigned. The result is an average of 12 bits per pixel.
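
The bits-per-pixel figures for YUV9 and YUV12 follow directly from the block sizes, as the sketch below shows (the function name and frame size are illustrative; the chroma decimation factor is 4 in each direction for YUV9 and 2 for YUV12):

/* Bytes and average bits per pixel for a planar Y'CbCr frame. */
#include <stdio.h>
#include <stddef.h>

static size_t planar_size(size_t width, size_t height, size_t sub)
{
    size_t luma   = width * height;                 /* one byte per Y' sample */
    size_t chroma = (width / sub) * (height / sub); /* per chroma plane       */
    return luma + 2 * chroma;                       /* Y + Cb + Cr planes     */
}

int main(void)
{
    size_t w = 352, h = 240;
    printf("YUV9:  %zu bytes, %.1f bits/pixel\n",
           planar_size(w, h, 4), 8.0 * planar_size(w, h, 4) / (double)(w * h));
    printf("YUV12: %zu bytes, %.1f bits/pixel\n",
           planar_size(w, h, 2), 8.0 * planar_size(w, h, 2) / (double)(w * h));
    return 0;
}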

YUY2
Intel's notation for the packed 4:2:2 YCbCr format, with samples stored in the repeating byte order Y0, Cb, Y1, Cr.

Zipper
See the definition for creepy-crawlies.

Zoom
Zoom is a type of image scaling. Zooming makes the picture larger so that you can see more detail. The examples described in the definition of scaling also apply here.
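
As a simple sketch of the idea (the function name and single-plane 8-bit layout are assumptions for illustration), an integer-factor zoom can be done by pixel replication; real scalers usually interpolate instead for better quality.

/* Zoom an 8-bit image by an integer factor using pixel replication. */
#include <stdint.h>

static void zoom_replicate(const uint8_t *src, uint8_t *dst,
                           int w, int h, int factor)
{
    int dw = w * factor;                            /* destination width */
    for (int y = 0; y < h * factor; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y / factor) * w + (x / factor)];
}

int main(void)
{
    const uint8_t src[4] = { 10, 20, 30, 40 };      /* 2 x 2 source image  */
    uint8_t dst[16];                                /* 4 x 4 after 2x zoom */
    zoom_replicate(src, dst, 2, 2, 2);
    return 0;
}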

Zoomed Video Port
Used on laptops, the ZV Port is a point-to-point, uni-directional bus between the PC Card host adapter and the graphics controller, enabling video data to be transferred in real time directly from the PC Card into the graphics frame buffer.

The PC Card host adapter has a special multimedia mode configuration. If a non-ZV PC Card is plugged into the slot, the host adapter is not switched into the multimedia mode, and the PC Card behaves as expected. Once a ZV card has been plugged in and the host adapter has been switched to the multimedia mode, the pin assignments change. The PC Card signals A4-A25, SPKR#, INPACK#, and IOIS16# are replaced by ZV Port video signals (Y0-Y7, UV0-UV7, HREF, VSYNC, PCLK) and four audio signals (MCLK, SCLK, LRCK, and SDATA).

ZV Port
See zoomed video port.

Zweiton
A technique for implementing stereo or dual-mono audio for NTSC and PAL video. One FM subcarrier transmits an L+R signal, and a second FM subcarrier transmits an R signal (for stereo) or a second mono signal (for dual mono). It is discussed in BS.707, and is similar to the BTSC technique.