Monday, March 31, 2008

Chroma Subsampling - Difference between YUV 4:2:0 and YUV 4:1:1


The difference between 4:2:0 and 4:1:1 is the position of the chroma samples: both carry the same reduced amount of chroma data, but they place the remaining Cb/Cr samples differently relative to the luma samples. See the figure.

The HVS (Human Visual System) has a poorer response to chrominance spatial detail than to luminance spatial detail. This property can be exploited to reduce bandwidth requirements by subsampling the chroma components. The most commonly used subsampling patterns are illustrated in the figure. In 4:2:2 subsampling, the chroma components are subsampled by a factor of 2 horizontally, giving a reduction of about 33% in the overall raw data rate. In 4:1:1 subsampling, the chroma components are subsampled by a factor of 4 horizontally, giving a reduction of 50%. In 4:2:0 subsampling, the chroma components are subsampled by a factor of 2 both horizontally and vertically, also giving a reduction of 50% in the overall raw data rate.
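As a rough illustration of 4:2:0, the sketch below averages each 2x2 block of a full-resolution chroma plane (Cb or Cr) into one sample, so the plane shrinks by half in each direction. This is only a minimal example under assumed conditions (even width and height, tightly packed buffers, a made-up function name); real codecs use defined filters and chroma siting rules.

#include <stdint.h>

/* Downsample one full-resolution chroma plane (Cb or Cr) to 4:2:0 by
 * averaging each 2x2 block. Assumes even width/height and tightly packed
 * row-major buffers; illustrative only. */
static void chroma_420_downsample(const uint8_t *src, uint8_t *dst,
                                  int width, int height)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x]
                    + src[y * width + x + 1]
                    + src[(y + 1) * width + x]
                    + src[(y + 1) * width + x + 1];
            dst[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)((sum + 2) / 4);
        }
    }
}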


Bluetooth

Bluetooth is a wireless communication protocol. It allows a device to communicate with one or more other Bluetooth-capable devices.

Bluetooth Vs Infrared

Of course, wireless communication between two computers is not new. PDAs have been able to do that for years using infrared technology. One drawback of infrared is that the devices involved must be within a few feet of each other, and, most importantly, the infrared transceivers must see each other "eye to eye." If either of those conditions is not met, the transmission will fail. Bluetooth overcomes the first limitation by having a nominal range of about 10 meters (30 feet). Bluetooth overcomes the second limitation because it works like a radio: transmissions are omnidirectional, so there are no line-of-sight issues when two Bluetooth devices communicate.

Bluetooth Vs. 802.11b

If you've heard of Bluetooth before, then you've certainly heard of 802.11b (the wireless LAN protocol), another wireless communication protocol. Bluetooth and 802.11b were created to accomplish two different goals, although both technologies operate in the same frequency band: 2.4 GHz.

The goal of the wireless LAN (802.11b) is to connect relatively large devices, with plenty of power, at high speeds. Typically, this technology is used to connect two laptops within 300 feet of each other at 11 Mb/s. It is also useful for network administrators who want to extend their LAN to places where it is either expensive or inconvenient to run cables.

On the other hand, Bluetooth is intended to connect smaller devices such as PDAs and mobile phones within a range of 30 feet, at a rate of 1 Mb/s. The slower data rate and shorter range allow Bluetooth to be a low-power wireless technology. Compared to 802.11b devices, some Bluetooth devices can consume as much as 500 times less power, which can make a huge difference in the battery life of many mobile devices.

Bluetooth is also intended as a cable-replacement technology. If you have multiple peripherals connected to your computer using RS-232 or USB, Bluetooth is an ideal solution if you want to use those devices wirelessly.


Thursday, March 27, 2008

3D Audio

3D sound refers to sound that lets a listener perceive significant spatial cues for a sound source, such as direction, distance and spaciousness. Generating 3D sound therefore means being able to place sounds anywhere in 3-dimensional space: left or right, up or down, near or far.

A 3D audio system has the ability to position sounds all around a listener. The sounds are actually created by the loudspeakers (or headphones), but the listener's perception is that the sounds come from arbitrary points in space. This is similar to stereo panning in conventional stereo systems; sounds can be panned to locations between the two loudspeakers, creating virtual images of the sound where there is no loudspeaker. However, conventional stereo systems generally cannot position sounds to the sides or rear of the listener, nor above or below the listener. A 3D audio system attempts to do just that.
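For the conventional stereo panning mentioned above, one common approach is a constant-power pan law, sketched below. This is only an assumed illustration (the function name is made up), not a description of any particular 3D audio engine; true 3D audio uses HRTF filtering or multi-speaker rendering rather than simple amplitude panning.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Constant-power stereo pan: pan = 0.0 is hard left, 1.0 is hard right.
 * The cosine/sine gain pair keeps perceived loudness roughly constant as
 * the virtual image moves between the two loudspeakers. */
static void pan_stereo(float in, float pan, float *left, float *right)
{
    float angle = (float)(pan * M_PI / 2.0);   /* 0 .. pi/2 */
    *left  = in * cosf(angle);
    *right = in * sinf(angle);
}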

3D sound is largely divided into two categories:
  • Positional sound image
  • Moving sound image

Video Resolution

Video is a sequence of still images representing scenes in motion.

Video Resolutions

  • QCIF : 176 * 144
  • CIF : 352 * 288
  • VGA : 640 * 480
  • QVGA : 320 * 240
  • HD : 1280 * 720 (720p), 1920 * 1080 (1080p)
  • PAL : 704 * 576
  • NTSC : 704 * 480
  • SIF : 352 * 240
  • QSIF : 176 * 120

BMP - Bitmap Files

The BMP file format, sometimes called Bitmap, is an image file format used to store bitmap digital images. BMP stores uncompressed (raw) pixel values. Each BMP file consists of a 54-byte header (a 14-byte file header followed by a 40-byte information header), followed by a CLUT (Color Look Up Table) if the bit count is <= 8, followed by the pixel values. Pixels are stored in BMP files in BGR order: the Blue value is stored first, followed by the Green value, followed by the Red value.

The first pixel (BGR triplet) in the BMP file corresponds to the pixel at the bottom left of the image, the second to the pixel next to it, and so on up to the bottom right of the image; that is, raster scanning from the bottom left to the bottom right. The next pixel values in the file correspond to the line immediately above the bottom line, and so on up to the top of the image.

In uncompressed BMP files, and many other BMP variants, image pixels are stored with a color depth of 1, 4, 8, 16, 24, or 32 bits per pixel. Images of 8 bits and fewer can be either grayscale or indexed color; indexed color requires a Color Look Up Table. Bitmap files are stored with the name extension .BMP, and occasionally we also see bitmap files stored in a device-independent bitmap form with the name extension .DIB. Device-independent bitmap files are simply a list of pixels with values for the red, green and blue components, omitting the header information associated with size and other descriptors. The bitmap image format originated in early versions of Microsoft Windows and has stayed ever since, through packages such as Paintbrush/Paint. While the Windows operating system supports images with 1, 4, 8, 16, 24 and 32 bits per pixel, we shall largely focus on monochrome and 24-bit colour here.

Bitmap File Structure

The bitmap file structure is very simple and consists of a bitmap-file header, a bitmap-information header, a colour table, and an array of bytes that define the bitmap image.
The file has the following form:

  • File Header Information
  • Image Header Information
  • CLUT(if present)
  • Pixel Values
BITMAPFILEHEADER

The bitmap file header contains information about the type, size, and layout of a bitmap file and permits a check as to the type of file the user is reading. The first two bytes of the file contain the ASCII character “B” followed by an “M” to identify the file type.

The next four bytes of the header contain the file size, stored with the least significant byte first (little-endian). The next four bytes are reserved and set to zero.

The final four bytes are the offset, in bytes, from the beginning of the file to the start of the image pixel data. Formally the structure is of the form:

BITMAPFILEHEADER {
2 bytes file type
4 bytes file size in bytes
2 bytes reserved
2 bytes reserved
4 bytes offset to data in bytes
} BITMAPFILEHEADER;
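In C, the same 14-byte header can be described roughly as below. This is a sketch: the field names follow the usual Windows naming, and the packing pragma is there so the compiler does not insert padding after the 2-byte type field.

#include <stdint.h>

#pragma pack(push, 1)           /* the on-disk header is packed, 14 bytes */
typedef struct {
    uint16_t bfType;            /* 'B','M' (0x4D42 when read little-endian) */
    uint32_t bfSize;            /* total file size in bytes */
    uint16_t bfReserved1;       /* unused, zero */
    uint16_t bfReserved2;       /* unused, zero */
    uint32_t bfOffBits;         /* offset from start of file to pixel data */
} BITMAPFILEHEADER;
#pragma pack(pop)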

BITMAPINFOHEADER

The bitmap-information header specifies the dimensions, compression type, and colour format for the bitmap. The first four bytes are the header size, usually 40, followed by the width and height of the image measured in pixels. The next two bytes contain the number of planes, which is always 1. The next two bytes store the number of bits used to represent the colour of a pixel, which in this text is usually 24 (referred to as true colour), as we frequently use such images. Twenty-four-bit colour has over the years become more prevalent as memory has become cheaper and processor speeds have increased. The next four bytes store the compression type (0 for uncompressed 24-bit RGB), followed by the image size in bytes (which may be 0 if uncompressed). The next eight bytes store the X and Y resolution (pixels/metre). The final entries are the number of colour map entries and the number of significant colours. Formally this is written as:

BITMAPINFOHEADER {
4 bytes size of the BITMAPINFOHEADER structure
4 bytes bitmap width in pixels
4 bytes bitmap height in pixels
2 bytes number of planes (always 1)
2 bytes bits/pixel (1 = monochrome)
4 bytes compression 0, 8, 4
4 bytes image size in bytes (may be 0 for monochrome)
4 bytes pixels/metre
4 bytes pixels/metre
4 bytes number of colour indexes used by bitmap in colour table
4 bytes number of colour indexes considered important
} BITMAPINFOHEADER;
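The corresponding 40-byte information header, again as a sketch using the conventional Windows field names:

#include <stdint.h>

#pragma pack(push, 1)           /* 40-byte BITMAPINFOHEADER, packed */
typedef struct {
    uint32_t biSize;            /* header size, usually 40 */
    int32_t  biWidth;           /* image width in pixels */
    int32_t  biHeight;          /* image height in pixels (positive = bottom-up) */
    uint16_t biPlanes;          /* always 1 */
    uint16_t biBitCount;        /* bits per pixel: 1, 4, 8, 16, 24 or 32 */
    uint32_t biCompression;     /* 0 = uncompressed (BI_RGB) */
    uint32_t biSizeImage;       /* image size in bytes, may be 0 when uncompressed */
    int32_t  biXPelsPerMeter;   /* horizontal resolution, pixels per metre */
    int32_t  biYPelsPerMeter;   /* vertical resolution, pixels per metre */
    uint32_t biClrUsed;         /* number of colour table entries used */
    uint32_t biClrImportant;    /* number of important colours */
} BITMAPINFOHEADER;
#pragma pack(pop)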

COLOUR LOOK UP TABLE(CLUT)

The colour table is not present for 24-bit bitmaps because each pixel is represented directly by three 8-bit blue, green and red (BGR) values in the bitmap data area. It is present for bit depths of 8 and below, where each pixel value is an index into the table.
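Each colour table entry is a 4-byte blue-green-red-reserved record. The sketch below shows how an indexed pixel would be resolved to its colour; the lookup function is made up for the example.

#include <stdint.h>

typedef struct {
    uint8_t rgbBlue;            /* one CLUT entry, stored blue first */
    uint8_t rgbGreen;
    uint8_t rgbRed;
    uint8_t rgbReserved;        /* unused, zero */
} RGBQUAD;

/* Illustrative lookup: map an 8-bit palette index to its BGR colour. */
static RGBQUAD clut_lookup(const RGBQUAD *clut, uint8_t index)
{
    return clut[index];
}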

IMAGE DATA

The bitmap data immediately following the colour table consists of BYTE values representing consecutive rows (scan lines) of the bitmap image in left-to-right order. Each scan line is zero-padded to end on a 32-bit boundary, i.e. rounded up to a multiple of four bytes. The scan lines are stored from the bottom to the top of the image, so the first byte of pixel data represents the pixel in the lower-left corner (origin) of the bitmap and the last byte represents the pixel in the upper-right corner.

The format of the file depends on the number of bits used to represent each pixel with the most significant bit field corresponding to the leftmost pixel.
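Putting the two rules together (rows padded to a multiple of four bytes, rows stored bottom-up), the byte offset of a given pixel in a 24-bit BMP can be computed roughly as in the sketch below; the helper names are assumptions for illustration.

#include <stddef.h>

/* Bytes per scan line for a 24-bit image, rounded up to a 4-byte boundary. */
static size_t bmp_row_stride_24(int width)
{
    return (size_t)((width * 3 + 3) & ~3);
}

/* Offset of pixel (x, y) within the pixel-data area, with (0, 0) taken as
 * the top-left of the image and rows stored bottom-up as described above.
 * The three bytes at this offset are B, G, R. */
static size_t bmp_pixel_offset_24(int x, int y, int width, int height)
{
    size_t stride = bmp_row_stride_24(width);
    return (size_t)(height - 1 - y) * stride + (size_t)x * 3;
}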

There are different types of BMP files. Some of them are:
  • RGB 24-bit
  • RGB 565
  • RGB 8-bit (or) RGB 256 Color
  • RGB 4-bit (or) RGB 16 Color
RGB 24-bit (or) RGB 888

Most BMP files are of this type. Of the 24 bits, 8 bits each are used for red, green and blue. This is an uncompressed format and does not require a CLUT. Ignoring row padding, the size of the file is 54 + (width * height * 3) bytes: the first 54 bytes are the BMP headers, and the 3 accounts for the Red, Green and Blue bytes of each pixel. When width * 3 is not a multiple of 4, each row is padded up to the next multiple of 4 bytes, which slightly increases the file size.


IEEE 802.11 Standards

IEEE 802.11 is an industry standard for a shared, wireless LAN that defines the PHY and MAC sublayer for wireless communications.

802.11 MAC Sublayer

At the MAC sublayer, all the IEEE 802.11 standards use the carrier sense multiple access with collision avoidance (CSMA/CA) MAC protocol. A wireless station with a frame to transmit first listens on the wireless frequency to determine whether another station is currently transmitting (carrier sense). If the medium is busy, the wireless station calculates a random backoff delay. After the random backoff delay, the wireless station again listens for a transmitting station. By instituting a random backoff delay, multiple stations that are waiting to transmit do not end up trying to transmit at the same time (collision avoidance).
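In very rough form, the carrier-sense-plus-random-backoff idea looks like the sketch below. The radio primitives are hypothetical stand-ins, not a real driver API, and real 802.11 additionally uses slot times, inter-frame spacing, and a contention window that doubles on each retry.

#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical radio primitives -- stand-ins for a real PHY/driver API. */
extern bool channel_is_busy(void);      /* carrier sense */
extern void wait_slots(int slots);      /* wait a number of slot times */
extern bool transmit_frame(const void *frame, int len);

/* Very rough CSMA/CA sketch: defer while the medium is busy, back off a
 * random number of slots, listen again, then transmit. */
static bool csma_ca_send(const void *frame, int len, int cw_max)
{
    while (channel_is_busy())
        ;                               /* defer while medium is in use */

    wait_slots(rand() % (cw_max + 1));  /* random backoff (collision avoidance) */

    if (channel_is_busy())              /* another station won the backoff */
        return false;                   /* caller retries, ideally with a larger window */

    return transmit_frame(frame, len);
}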

The CSMA/CA scheme does not prevent all collisions, and it is difficult for a transmitting node to detect that a collision has occurred. Depending on the placement of the wireless access point (AP) and the wireless clients, distance or radio frequency (RF) barriers can also prevent a wireless client from sensing that another wireless node is transmitting (known as the hidden station problem).

To better detect collisions and solve the hidden station problem, IEEE 802.11 uses acknowledgment (ACK) frames and Request to Send (RTS) and Clear to Send (CTS) messages. ACK frames indicate when a wireless frame is successfully received. When a station wants to transmit a frame, it sends an RTS message that indicates the amount of time it needs to send the frame. The wireless AP sends a CTS message to all stations, granting permission to the requesting station and informing all other stations that they are not allowed to transmit for the time reserved by the RTS message. The exchange of RTS and CTS messages eliminates collisions due to hidden stations.


802.11 PHY Sublayer

At the physical (PHY) layer, IEEE 802.11 defines a series of encoding and transmission schemes for wireless communications, the most prevalent of which are the Frequency Hopping Spread Spectrum (FHSS), Direct Sequence Spread Spectrum (DSSS), and Orthogonal Frequency-Division Multiplexing (OFDM) transmission schemes.



IEEE 802.11

The bit rates for the original IEEE 802.11 standard are 2 and 1 Mbps using the FHSS transmission scheme and the S-Band Industrial, Scientific, and Medical (ISM) frequency band, which operates in the frequency range of 2.4 to 2.5 GHz. Under good transmission conditions, 2 Mbps is used; under less-than-ideal conditions, the lower speed of 1 Mbps is used.


802.11b :

The major enhancement to IEEE 802.11 by IEEE 802.11b is the standardization of the physical layer to support higher bit rates. IEEE 802.11b supports two additional speeds, 5.5 Mbps and 11 Mbps, using the S-Band ISM. The DSSS transmission scheme is used in order to provide the higher bit rates. The bit rate of 11 Mbps is achievable in ideal conditions. In less-than-ideal conditions, the slower speeds of 5.5 Mbps, 2 Mbps, and 1 Mbps are used.


802.11a :

IEEE 802.11a (the first standard to be ratified, but just now being widely sold and deployed) operates at a bit rate as high as 54 Mbps and uses the C-Band ISM, which operates in the frequency range of 5.725 to 5.875 GHz. Instead of DSSS, 802.11a uses OFDM, which allows data to be transmitted by subfrequencies in parallel and provides greater resistance to interference and greater throughput. This higher-speed technology enables wireless LAN networking to perform better for video and conferencing applications.

Because they are not on the same frequencies as other S-Band devices (such as cordless phones), OFDM and IEEE 802.11a provide both a higher data rate and a cleaner signal. The bit rate of 54 Mbps is achievable in ideal conditions. In less-than-ideal conditions, the slower speeds of 48 Mbps, 36 Mbps, 24 Mbps, 18 Mbps, 12 Mbps, and 6 Mbps are used.


IEEE 802.11 Operating Modes

IEEE 802.11 defines two operating modes:
  • Infrastructure Mode
  • Ad hoc mode

Regardless of the operating mode, a Service Set Identifier (SSID), also known as the wireless network name, identifies the wireless network. The SSID is a name configured on the wireless AP (for infrastructure mode) or an initial wireless client (for ad hoc mode) that identifies the wireless network. The SSID is periodically advertised by the wireless AP or the initial wireless client using a special 802.11 MAC management frame known as a beacon frame.


Infrastructure Mode

In infrastructure mode, there is at least one wireless AP (Access Point) and one wireless client. The wireless client uses the wireless AP to access the resources of a traditional wired network. The wired network can be an organization's intranet or the Internet, depending on the placement of the wireless AP.

Ad hoc Mode

In ad hoc mode, wireless clients communicate directly with each other without the use of a wireless AP.

Ad hoc mode is also called peer-to-peer mode. Wireless clients in ad hoc mode form an Independent Basic Service Set (IBSS). One of the wireless clients, the first wireless client in the IBSS, takes over some of the responsibilities of the wireless AP. These responsibilities include the periodic beaconing process and the authentication of new members. This wireless client does not act as a bridge to relay information between wireless clients.

Ad hoc mode is used to connect wireless clients together when there is no wireless AP present. The wireless clients must be explicitly configured to use ad hoc mode. There can be a maximum of nine members in an ad hoc 802.11 wireless network.


IEEE 802 Standards

The Institute of Electrical and Electronics Engineers (IEEE) Project 802 was formed at the beginning of the 1980s to develop standards for emerging technologies. The IEEE fostered the development of local area networking equipment from different vendors that could work together. In addition, IEEE LAN standards provided a common design goal, giving vendors access to a larger market than if proprietary equipment had been developed. This, in turn, enabled economies of scale to lower the cost of products developed for those larger markets. The committee tasked with IEEE Project 802 is the IEEE Local and Metropolitan Area Network (LAN/MAN) Standards Committee. Its basic charter is to create, maintain, and encourage the use of IEEE/ANSI and equivalent ISO standards, primarily within layers 1 and 2 of the ISO Reference Model.

IEEE 802 Series of Standards

Wednesday, March 26, 2008

Image

An image is a rectangular grid of pixels, width pixels across horizontally and height pixels down vertically. There are two types of images.
1) Grayscale Image
2) Color Image

Grayscale Image :
A grayscale image is a monochrome version of a color image. Its shades run from black through dark gray and light gray up to white. A pixel value of 0 represents black and a pixel value of 255 represents white; values between 0 and 255 range from dark gray to light gray.

Color Image :
Each pixel is a combination of Red, Green and Blue (RGB). Therefore the size of the image is (width * height * 3) bytes: (width * height) bytes each for the Red, Green and Blue components.
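As a small illustration of the two image types, the sketch below converts an interleaved 24-bit RGB buffer to an 8-bit grayscale buffer using the widely used Rec. 601 luma weights; the interleaved buffer layout and the function name are assumptions for the example.

#include <stdint.h>
#include <stddef.h>

/* Convert interleaved R,G,B bytes (width*height*3 total) to an 8-bit
 * grayscale buffer (width*height) using Rec. 601 weights 0.299/0.587/0.114. */
static void rgb_to_gray(const uint8_t *rgb, uint8_t *gray,
                        int width, int height)
{
    size_t npix = (size_t)width * (size_t)height;
    for (size_t i = 0; i < npix; i++) {
        uint8_t r = rgb[3 * i + 0];
        uint8_t g = rgb[3 * i + 1];
        uint8_t b = rgb[3 * i + 2];
        gray[i] = (uint8_t)((299 * r + 587 * g + 114 * b) / 1000);
    }
}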