Friday, May 2, 2008

Is ZigBee a big threat to Bluetooth?

WIRELESS STANDARDS seem to be breeding. Perhaps once you get two of them nicely settled in an unlicensed bit of spectrum it's inevitable. Late last year, ZigBee arrived in the 2.4GHz band, joining the now well-established Bluetooth and Wi-Fi. ZigBee looks rather like Bluetooth but is simpler, has a lower data rate and spends most of its time snoozing. This characteristic means that a node on a ZigBee network should be able to run for six months to two years on just two AA batteries, its backers claim.

EARLY PROMOTION

Philips, Motorola, Honeywell, Invensys and Mitsubishi Electric started promoting ZigBee when they formed the ZigBee Alliance in October 2002, once they had secured the physical layer (PHY) and media access control (MAC) layer under the IEEE 802.15.4 WPAN (Wireless Personal Area Network) standard.

ZigBee

ZigBee is the emerging industrial standard for ad hoc networks based on IEEE 802.15.4. Thanks to characteristics such as low data rate, low price, and low power consumption, ZigBee is expected to be used in wireless sensor networks for remote monitoring, home control, and industrial automation. Since one of the most important goals is to reduce installation and running costs, the ZigBee stack is embedded in small, cheap microcontroller units. And since tree routing does not require routing tables to get a packet to its destination, it can be used in ZigBee end devices that have limited resources.

The ZigBee standard can operate in the 2.4GHz band or in the 868MHz and 915MHz ISM (industrial, scientific and medical) bands used in Europe and the US respectively. It sits below Bluetooth in terms of data rate: 250kbps at 2.4GHz (compared to Bluetooth's 1Mbps) and 20-40kbps in the lower frequency bands. The operational range is 10-75m, compared to 10m for Bluetooth (without a power amplifier).

One other important difference between ZigBee and Bluetooth is how their protocols work. ZigBee uses a basic master-slave configuration suited to static star networks of many infrequently used devices that talk via small data packets. This suits ZigBee to building automation and the control of multiple lights, security sensors and so on.

Bluetooth's protocol is more complex because it is geared towards handling voice, images and file transfers in ad hoc networks. Bluetooth devices can work peer-to-peer and support scatternets of multiple smaller non-synchronised networks (piconets). The protocol, however, only allows up to seven active slave nodes in a basic master-slave piconet (eight devices including the master). ZigBee allows up to 254 nodes. Masters can talk to each other, and the number of nodes can be increased beyond 254 if necessary.

Low latency is another important feature of ZigBee. When a ZigBee device is powered down (all circuitry switched off apart from a clock running at 32kHz), it can wake up and get a packet across a network connection in around 15 milliseconds. A Bluetooth device in a similar state would take around three seconds to wake up and respond. "The latency gives you some power consumption advantages and it's important for timing-critical messages. A sensor in an industrial plant needs to get its messages through in milliseconds."


Wednesday, April 9, 2008

Streaming

Streaming is often described as real-time, though that is a somewhat vague term: it implies viewing an event as it happens. Typical television systems have latency; it may be milliseconds, but with highly compressed codecs it can be several seconds. The primary factor that makes a stream real-time is that there is no intermediate storage of the data packets. There may be short buffers, like frame stores in the decoder, but the signal essentially streams all the way from the camera to the player. Streamed media is not stored on the local disk of the client machine, unless a download is specifically requested (and allowed).

Just because streaming is real-time does not mean it has to be live. Prerecorded files can also be delivered in real-time. The server delivers packets to the network at a rate that matches the correct video playback speed.
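To make that pacing concrete, here is a minimal sketch (not from the original text; the bitrate and packet size are illustrative assumptions) of how a server can derive the inter-packet interval from the stream's bitrate:

/* Sketch: pace packets of a prerecorded stream at playback rate.
   The 1 Mbps bitrate and 1400-byte payload are assumed values. */
#include <stdio.h>

int main(void)
{
    const double bitrate_bps  = 1000000.0; /* encoded stream rate */
    const double packet_bytes = 1400.0;    /* payload per packet */

    /* Each packet "covers" size/rate seconds of playback time. */
    double interval_s = (packet_bytes * 8.0) / bitrate_bps;

    printf("send one packet every %.1f ms\n", interval_s * 1000.0);
    return 0;
}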

Applications

Wherever electronic communication is used, the applications for streaming are endless. Streaming can be delivered as a complete video package of linear programming, as a subscription service, or as pay-per-view (PPV). It can form part of an interactive web site or it can be a tool in its own right, for video preview and film dailies. Some applications are:

  • Internet broadcasting (corporate communications)
  • Education (viewing lectures and distance learning)
  • Web-based channels (IP-TV, Internet radio)
  • Video-on-demand (VOD)
  • Music distribution (music on-demand)
  • Internet and intranet browsing of content (asset management)

Tuesday, April 8, 2008

Geographical Information System - GIS

Although many GISs have been successfully implemented, it has become quite clear that two-dimensional maps, even with the most complex contours and color schemes, cannot precisely present multidimensional and dynamic spatial phenomena. Most GISs in use today were not designed to support multimedia data and therefore have very limited capability for it, given its large data volumes, very rich semantics, and very different modeling and processing requirements.


Introduction

Geographical Information Systems (GIS) are computer-based systems that enable users to collect, store, process, analyze and present spatial data.

A GIS provides an electronic representation of information, called spatial data, about the Earth's natural and man-made features. A GIS references these real-world spatial data elements to a coordinate system. These features can be separated into layers: a GIS stores each category of information in a separate "layer" for ease of maintenance, analysis, and visualization. For example, layers can represent terrain characteristics, census data, demographics, environmental and ecological data, roads, land use, river drainage and flood plains, and rare wildlife habitats. Different applications create and use different layers.

A GIS can also store attribute data, which is descriptive information about the map features. This attribute information is placed in a database separate from the graphics data but is linked to it. A GIS allows the examination of both spatial and attribute data at the same time, and lets users search the attribute data and relate it to the spatial data. A GIS can therefore combine geographic and other types of data to generate maps and reports, enabling users to collect, manage, and interpret location-based information in a planned and systematic way. In short, a GIS can be defined as a computer system capable of assembling, storing, manipulating, and displaying geographically referenced information.

GIS systems are dynamic and permit rapid updating, analysis, and display. They use data from many diverse sources such as satellite imagery, aerial photos, maps, ground surveys, and global positioning systems (GPS).

Multimedia and Geographical Information System (GIS)

Multimedia

Multimedia is a technology that encompasses various types of data and presents them in an integrated form. There are several types of data that are used by the technology, including text, graphics, hyperlinks, images, sound, digital and analogue video and animation.

Although many GISs have been successfully implemented, it has become quite clear that two-dimensional maps cannot precisely present multidimensional and dynamic spatial phenomena. Moreover, there is a growing need for access to spatial data. Merging GIS and multimedia appears to be a way to deal with these issues.

The latest advances in the computer industry, especially in hardware, have led to the development of Multimedia and Geographical Information System (GIS) technologies. Multimedia provides communication using text, graphics, animation, and video. Multimedia GIS systems are a way to overcome the limitations the two technologies display when used separately. Multimedia can extend GIS capabilities for presenting geographic and other information. The combination of several media often results in a more powerful and richer presentation of information and ideas, stimulating interest and enhancing information retention. It can also make a GIS friendlier and easier to use. On the other hand, multimedia can benefit from GIS by gaining an environment that facilitates the use and analysis of spatial data. The result is a system that has the advantages of both worlds without most of their disadvantages.


Monday, April 7, 2008

DM6437 Digital Video Development Platform

The TMS320C64x+™ DSPs (including the TMS320DM6437 device) are the highest-performance fixed-point DSP generation in the TMS320C6000™ DSP platform. The DM6437 device is based on the third-generation high-performance, advanced VelociTI™ very-long-instruction-word (VLIW) architecture developed by Texas Instruments (TI), making these DSPs an excellent choice for digital media applications. The C64x+™ devices are upward code-compatible from previous devices that are part of the C6000™ DSP platform. The C64x™ DSPs support added functionality and have an expanded instruction set from previous devices.

More details about the TI DM6437 DSP are available on TI's website.


Thursday, April 3, 2008

Managing Memory Consumption

Memory consumption is a major concern in the design of software for DSP and mobile devices. At the same time, a dynamically linked library is often the smallest unit of software that can realistically be managed when developing software for mobile devices. One particular detail to keep in mind is that, when managing memory consumption, some of the available memory will necessarily be spent on the management routines themselves.

Memory Limit

Setting explicit limits on memory usage for all parts of the system is one way to make the importance of controlling memory usage concrete. Therefore, make all dynamically linked libraries (and other development-time entities), as well as their developers, responsible for the memory they allocate. This can be achieved, for instance, by monitoring all memory reservations made by a library or a program, using a routine such as the following, where myLimit is the maximum amount of memory the library (or subsystem) may allocate and myMemory is the amount currently allocated.

#include <stdlib.h>

static size_t myMemory = 0; /* bytes this library has allocated so far */
static size_t myLimit;      /* maximum bytes this library may allocate */

void *myMalloc(size_t size)
{
#ifdef MEMORY_LIMITATION_ACTIVE
    if (myMemory + size > myLimit) return 0; /* limit reached: refuse */
    else { /* Limit not reached. */
        void *tmp = malloc(size);
        /* Update myMemory if the allocation succeeded. */
        if (tmp) myMemory = myMemory + size;
        return tmp;
    }
#else
    return malloc(size);
#endif
}

While the above procedure only monitors memory used for variables, the same approach is applicable to a program's overall memory consumption as well. The role of the approach is then to focus developers' attention on memory consumption during design. Furthermore, designing memory usage in advance creates the option of investing memory in some parts of a system for increased flexibility, and of optimizing for a small memory footprint in parts that will not be altered. In order to give realistic estimates when setting memory limits for future releases, one should maintain a database of the memory consumption of previous releases and monitor how consumption evolves. More precise estimates of the final result can be obtained by also recording in the database the estimates made at different phases of a development project; these can be used to evaluate the error in estimates made during the planning and design phases.
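One point the routine above leaves open is deallocation: free() does not report the block size, so myMemory cannot be decremented without extra bookkeeping. Below is a minimal sketch of one common workaround, storing the size in a prefix in front of each block (the names and the prefix scheme are assumptions, not from the original text):

/* Sketch (assumed): wrap allocation with a size prefix so the matching
   free can decrement the usage counter. A production version would
   align the payload to max_align_t. */
#include <stdlib.h>

static size_t myMemory = 0;   /* same counter as in myMalloc above;
                                 repeated so this sketch stands alone */

void *trackedMalloc(size_t size)
{
    /* Reserve room for the size prefix plus the caller's payload. */
    size_t *block = malloc(sizeof(size_t) + size);
    if (!block) return 0;
    *block = size;            /* remember the payload size */
    myMemory += size;
    return block + 1;         /* hand the caller the payload area */
}

void trackedFree(void *ptr)
{
    if (!ptr) return;
    size_t *block = (size_t *)ptr - 1;  /* step back to the prefix */
    myMemory -= *block;
    free(block);
}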


Wednesday, April 2, 2008

DSP Engineer

What is a DSP Engineer?

See the DSP Designline article of the same name for a detailed explanation.

Monday, March 31, 2008

Chroma Subsampling - Difference between YUV 4:2:0 and YUV 4:1:1


The difference between 4:2:0 and 4:1:1 lies in the alignment of the chroma sample positions; both reduce the chroma data by the same amount. See the figure.

The HVS (Human Visual System) has a poor response to chrominance spatial detail compared with its response to luminance spatial detail. This property can be exploited to reduce bandwidth requirements by subsampling the chroma components. The most commonly used subsampling patterns are illustrated in the figure. In 4:2:2 subsampling, the chroma components are subsampled by a factor of 2 horizontally. This gives a reduction of about 33% in the overall raw data rate. In 4:1:1 subsampling, the chroma components are subsampled by a factor of 4 horizontally, giving a reduction of 50%. In 4:2:0 subsampling, the chroma components are subsampled by a factor of 2 both horizontally and vertically, also giving a reduction of 50% in the overall raw data rate.
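As a concrete illustration (not from the original post; the function and its names are a sketch), 4:2:0 chroma can be produced from a full-resolution chroma plane by averaging each 2x2 block:

/* Sketch: 4:4:4 -> 4:2:0 chroma by averaging each 2x2 block.
   Assumes even width and height and 8-bit samples. */
#include <stdint.h>

void subsample420(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x]
                    + src[y * width + x + 1]
                    + src[(y + 1) * width + x]
                    + src[(y + 1) * width + x + 1];
            /* +2 rounds to nearest rather than truncating */
            dst[(y / 2) * (width / 2) + (x / 2)] = (uint8_t)((sum + 2) / 4);
        }
    }
}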


Bluetooth

Bluetooth is a wireless communication protocol. It is used to communicate with one or more other Bluetooth-capable devices.

Bluetooth Vs Infrared

Of course, wireless communication between two computers is not new. PDAs have been able to do that for years using infrared technology. One drawback to infrared is that the devices involved must be within a few feet of each other, and most importantly, the infrared transceivers must see each other "eye to eye." If either of those conditions is not met, the transmission will fail. Bluetooth overcomes the first limitation by having a nominal range of about 10 meters (30 feet). It overcomes the second because it works like a radio, so transmissions are omnidirectional. Consequently, there are no line-of-sight issues when communication occurs between two Bluetooth devices.

Bluetooth Vs. 802.11b

If you've heard of Bluetooth before, then you've certainly heard of 802.11b (the wireless LAN protocol), another wireless communication protocol. Bluetooth and 802.11b were created to accomplish two different goals, although both technologies operate in the same frequency band: 2.4 GHz.

The goal of a wireless LAN (802.11b) is to connect two relatively large devices that have plenty of power, at high speed. Typically, this technology is used to connect two laptops within 300 feet at 11 Mb/s. It is also useful for network administrators who want to extend their LAN to places where running cables is either expensive or inconvenient.

On the other hand, Bluetooth is intended to connect smaller devices like PDAs and mobile phones within a range of 30 feet at a rate of 1 Mb/s. Slower data rates and shorter ranges allow Bluetooth to be a low-power wireless technology. Compared to 802.11b devices, some Bluetooth devices can consume roughly 500 times less power, which can make a huge difference in the battery life of many mobile devices.

Bluetooth is also intended as a cable-replacement technology. If you have multiple peripherals connected to your computer using RS-232 or USB, Bluetooth is an ideal way to use those devices wirelessly.


Thursday, March 27, 2008

3D Audio

3D sound refers to sound that lets a listener discern significant spatial cues for a sound source, such as direction, distance and spaciousness. Generating 3D sound therefore means that one can place sounds anywhere in three-dimensional space - left or right, up or down, near or far.

A 3D audio system has the ability to position sounds all around a listener. The sounds are actually created by the loudspeakers (or headphones), but the listener's perception is that the sounds come from arbitrary points in space. This is similar to stereo panning in conventional stereo systems: sounds can be panned to locations between the two loudspeakers, creating virtual images of the sound where there is no loudspeaker. However, conventional stereo systems generally cannot position sounds to the sides or rear of the listener, nor above or below the listener. A 3D audio system attempts to do just that.
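The stereo panning mentioned above can be sketched with the familiar constant-power pan law (an illustrative example, not from the original text):

/* Sketch: constant-power stereo pan. pan = 0.0 (hard left) to
   1.0 (hard right); cos/sin gains keep loudness roughly constant. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void panSample(float in, float pan, float *left, float *right)
{
    float angle = pan * (float)M_PI / 2.0f;  /* 0 .. pi/2 */
    *left  = in * cosf(angle);
    *right = in * sinf(angle);
}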

3D sound is largely divided into two categories:
  • Positional sound image
  • Moving sound image

Video Resolution

Video is a sequence of still images representing scenes in motion.

Video Resolutions

  • QCIF: 176 * 144
  • CIF: 352 * 288
  • VGA: 640 * 480
  • QVGA: 320 * 240
  • HD: 1280 * 720 and 1920 * 1080
  • PAL: 704 * 576
  • NTSC: 704 * 480
  • SIF: 352 * 240
  • QSIF: 176 * 120

BMP - Bitmap Files

The BMP file format, sometimes called Bitmap, is an image file format used to store bitmap digital images. BMP stores uncompressed (raw) pixel values. Each BMP file consists of a 54-byte file header, followed by a CLUT (Color Look-Up Table) if the bit count is 8 or less, followed by the pixel values. Pixels are stored in BGR order: the blue value is stored first, followed by the green value, then the red value. The first pixel (BGR) in the BMP file corresponds to the pixel at the bottom left of the image; the second corresponds to the pixel next to it, and so on to the bottom right of the image - that is, a raster scan from bottom left to bottom right. The next pixel values in the file correspond to the line immediately above the bottom line. In uncompressed BMP files, and many other BMP variants, image pixels are stored with a color depth of 1, 4, 8, 16, 24, or 32 bits per pixel. Images of 8 bits and fewer can be either grayscale or indexed color; indexed color requires a Color Look-Up Table.

Bitmap files are stored with the name extension .BMP, and occasionally bitmap files are stored in a device-independent bitmap form with the name extension .DIB. Device-independent bitmap files are simply a list of pixels with values for the red, green and blue components, omitting the header information associated with size and other descriptors. The bitmap image format originated in early versions of Microsoft Windows and has remained ever since, through packages such as Paintbrush/Paint. While the Windows operating system supports images with 1, 4, 8, 16, 24 and 32 bits per pixel, we shall largely focus on monochrome and 24-bit colour here.

Bitmap File Structure

The bitmap file structure is very simple and consists of a bitmap-file header, a bitmap-information header, a colour table, and an array of bytes that define the bitmap image.
The file has the following form:

  • File Header Information
  • Image Header Information
  • CLUT(if present)
  • Pixel Values

BITMAPFILEHEADER

The bitmap file header contains information about the type, size, and layout of a bitmap file and permits a check on the type of file being read. The first two bytes of the file contain the ASCII character "B" followed by an "M" to identify the file type.

The next four bytes of the header contain the file size, stored least significant byte first. The following four bytes are reserved and set to zero.

The final four bytes are the offset, measured in bytes, from the beginning of the file to the start of the image pixel data. Formally the structure is of the form:

BITMAPFILEHEADER {
    2 bytes    file type ("BM")
    4 bytes    file size in bytes
    2 bytes    reserved
    2 bytes    reserved
    4 bytes    offset to pixel data in bytes
} BITMAPFILEHEADER;
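Rendered as a packed C struct (a sketch using the conventional Windows field names; the structure must be packed because the on-disk layout has no padding):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint16_t bfType;      /* "BM" = 0x4D42 */
    uint32_t bfSize;      /* file size in bytes */
    uint16_t bfReserved1; /* unused, 0 */
    uint16_t bfReserved2; /* unused, 0 */
    uint32_t bfOffBits;   /* offset to pixel data in bytes */
} BITMAPFILEHEADER;
#pragma pack(pop)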

BITMAPINFOHEADER

The bitmap-information header specifies the dimensions, compression type, and colour format of the bitmap. The first four bytes are the header size, usually 40 bytes, followed by the width and height of the image measured in pixels. The next two bytes contain the number of planes, which is always 1. The next two bytes store the number of bits used to represent the colour of a pixel, which here is usually 24 (referred to as true colour), as we frequently use such images. Twenty-four-bit colour has become more prevalent over the years as memory has become cheaper and processor speeds have increased. The next four bytes store the compression type (0 for uncompressed 24-bit RGB), followed by the image size (which may be 0 if uncompressed). The next eight bytes store the X and Y resolution in pixels/metre. The final entries in the bitmap-information section are the number of colour-map entries and the number of significant colours. Formally this is written as:

BITMAPINFOHEADER {
    4 bytes    size of the BITMAPINFOHEADER structure (usually 40)
    4 bytes    bitmap width in pixels
    4 bytes    bitmap height in pixels
    2 bytes    number of planes (always 1)
    2 bytes    bits per pixel (1 = monochrome)
    4 bytes    compression type (0 = uncompressed, 1 = RLE-8, 2 = RLE-4)
    4 bytes    image size in bytes (may be 0 if uncompressed)
    4 bytes    horizontal resolution, pixels/metre
    4 bytes    vertical resolution, pixels/metre
    4 bytes    number of colour indexes used by the bitmap in the colour table
    4 bytes    number of colour indexes considered important
} BITMAPINFOHEADER;
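The same header as a packed C struct (again a sketch with the conventional Windows field names):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t biSize;          /* header size, usually 40 */
    int32_t  biWidth;         /* bitmap width in pixels */
    int32_t  biHeight;        /* bitmap height in pixels */
    uint16_t biPlanes;        /* always 1 */
    uint16_t biBitCount;      /* bits per pixel */
    uint32_t biCompression;   /* 0 = uncompressed RGB */
    uint32_t biSizeImage;     /* may be 0 if uncompressed */
    int32_t  biXPelsPerMeter; /* horizontal resolution */
    int32_t  biYPelsPerMeter; /* vertical resolution */
    uint32_t biClrUsed;       /* colour indexes in colour table */
    uint32_t biClrImportant;  /* colour indexes considered important */
} BITMAPINFOHEADER;
#pragma pack(pop)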

COLOUR LOOK UP TABLE(CLUT)

The colour table is not present for 24-bit bitmaps because each pixel is represented directly by three 8-bit blue-green-red (BGR) values in the actual bitmap data area.

IMAGE DATA

The bitmap data immediately following the colour table consists of BYTE values representing consecutive rows (scan lines) of the bitmap image in left-to-right order. A scan line must be zero-padded to end on a 32-bit boundary, that is, rounded up to a multiple of four bytes. The scan lines are stored from the bottom to the top of the image. This means that the first byte in the file represents the pixels in the lower-left corner (origin) of the bitmap and the last byte represents the pixels in the upper-right corner.

The format of the file depends on the number of bits used to represent each pixel with the most significant bit field corresponding to the leftmost pixel.
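The padding rule above reduces to a one-line stride calculation (a sketch; the function name is illustrative):

/* Bytes per scan line: round the row's bits up to a 32-bit boundary. */
unsigned bmpStride(unsigned widthPixels, unsigned bitsPerPixel)
{
    return ((widthPixels * bitsPerPixel + 31) / 32) * 4;
}
/* e.g. a 24-bit image 3 pixels wide: (3*24 + 31)/32 * 4 = 12 bytes
   (9 data bytes + 3 padding bytes). */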

There are different types of BMP files. Some of them are:
  • RGB 24-bit
  • RGB 565
  • RGB 8-bit (or) RGB 256 Color
  • RGB 4-bit (or) RGB 16 Color

RGB 24-bit (or) RGB 888

Most BMP files are of this type. Each pixel takes 24 bits: 8 bits each for blue, green and red. This is the uncompressed format and does not require a CLUT. When the row size is already a multiple of four bytes, the file size is 54 + (width * height * 3) bytes: 54 bytes for the BMP headers and 3 bytes per pixel for blue, green and red (otherwise each row is padded up to a multiple of four bytes, as described above).


IEEE 802.11 Standards

IEEE 802.11 is an industry standard for a shared, wireless LAN that defines the PHY and MAC sublayer for wireless communications.

802.11 MAC Sublayer

At the MAC sublayer, all the IEEE 802.11 standards use the carrier sense multiple access with collision avoidance (CSMA/CA) MAC protocol. A wireless station with a frame to transmit first listens on the wireless frequency to determine whether another station is currently transmitting (carrier sense). If the medium is busy, the wireless station calculates a random backoff delay. After the random backoff delay, the wireless station again listens for a transmitting station. By instituting a random backoff delay, multiple stations that are waiting to transmit do not end up trying to transmit at the same time (collision avoidance).
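A rough sketch of that backoff calculation (a simplified model of 802.11-style binary exponential backoff; the contention-window and slot-time constants vary by PHY and are illustrative here):

/* Sketch: the contention window starts at CW_MIN, roughly doubles
   after each failed attempt, and is capped at CW_MAX; the station
   then waits a random number of slot times. */
#include <stdlib.h>

#define CW_MIN 15         /* illustrative; varies by PHY */
#define CW_MAX 1023
#define SLOT_TIME_US 20   /* 802.11b slot time, microseconds */

unsigned backoffDelayUs(unsigned retries)
{
    unsigned cw = CW_MIN;
    while (retries-- > 0 && cw < CW_MAX)
        cw = cw * 2 + 1;  /* 15 -> 31 -> 63 -> ... -> 1023 */
    return (rand() % (cw + 1)) * SLOT_TIME_US;
}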

The CSMA/CA scheme does not prevent all collisions, and it is difficult for a transmitting node to detect that a collision has occurred. Depending on the placement of the wireless access point (AP) and the wireless clients, distance or radio frequency (RF) barriers can also prevent a wireless client from sensing that another wireless node is transmitting (known as the hidden station problem).

To better detect collisions and solve the hidden station problem, IEEE 802.11 uses acknowledgment (ACK) frames and Request to Send (RTS) and Clear to Send (CTS) messages. ACK frames indicate when a wireless frame is successfully received. When a station wants to transmit a frame, it sends an RTS message that indicates the amount of time it needs to send the frame. The wireless AP sends a CTS message to all stations, granting permission to the requesting station and informing all other stations that they are not allowed to transmit for the time reserved by the RTS message. The exchange of RTS and CTS messages eliminates collisions due to hidden stations.


802.11 PHY Sublayer

At the physical (PHY) layer, IEEE 802.11 defines a series of encoding and transmission schemes for wireless communications, the most prevalent of which are the Frequency Hopping Spread Spectrum (FHSS), Direct Sequence Spread Spectrum (DSSS), and Orthogonal Frequency-Division Multiplexing (OFDM) transmission schemes.



IEEE 802.11

The bit rates for the original IEEE 802.11 standard are 2 and 1 Mbps using the FHSS transmission scheme and the S-Band Industrial, Scientific, and Medical (ISM) frequency band, which operates in the frequency range of 2.4 to 2.5 GHz. Under good transmission conditions, 2 Mbps is used; under less-than-ideal conditions, the lower speed of 1 Mbps is used.


802.11b :

The major enhancement to IEEE 802.11 by IEEE 802.11b is the standardization of the physical layer to support higher bit rates. IEEE 802.11b supports two additional speeds, 5.5 Mbps and 11 Mbps, using the S-Band ISM. The DSSS transmission scheme is used in order to provide the higher bit rates. The bit rate of 11 Mbps is achievable in ideal conditions. In less-than-ideal conditions, the slower speeds of 5.5 Mbps, 2 Mbps, and 1 Mbps are used.


802.11a :

IEEE 802.11a (the first standard to be ratified, but just now being widely sold and deployed) operates at a bit rate as high as 54 Mbps and uses the C-Band ISM, which operates in the frequency range of 5.725 to 5.875 GHz. Instead of DSSS, 802.11a uses OFDM, which allows data to be transmitted by subfrequencies in parallel and provides greater resistance to interference and greater throughput. This higher-speed technology enables wireless LAN networking to perform better for video and conferencing applications.

Because they are not on the same frequencies as other S-Band devices (such as cordless phones), OFDM and IEEE 802.11a provide both a higher data rate and a cleaner signal. The bit rate of 54 Mbps is achievable in ideal conditions. In less-than-ideal conditions, the slower speeds of 48 Mbps, 36 Mbps, 24 Mbps, 18 Mbps, 12 Mbps, and 6 Mbps are used.


IEEE 802.11 Operating Modes

IEEE 802.11 defines two operating modes :
  • Infrastructure Mode
  • Ad hoc mode

Regardless of the operating mode, a Service Set Identifier (SSID), also known as the wireless network name, identifies the wireless network. The SSID is a name configured on the wireless AP (for infrastructure mode) or an initial wireless client (for ad hoc mode) that identifies the wireless network. The SSID is periodically advertised by the wireless AP or the initial wireless client using a special 802.11 MAC management frame known as a beacon frame.


Infrastructure Mode

In infrastructure mode, there is at least one wireless AP (Access Point) and one wireless client. The wireless client uses the wireless AP to access the resources of a traditional wired network. The wired network can be an organization's intranet or the Internet, depending on the placement of the wireless AP.

Ad hoc Mode

In ad hoc mode, wireless clients communicate directly with each other without the use of a wireless AP.

Ad hoc mode is also called peer-to-peer mode. Wireless clients in ad hoc mode form an Independent Basic Service Set (IBSS). One of the wireless clients, the first wireless client in the IBSS, takes over some of the responsibilities of the wireless AP. These responsibilities include the periodic beaconing process and the authentication of new members. This wireless client does not act as a bridge to relay information between wireless clients.

Ad hoc mode is used to connect wireless clients together when there is no wireless AP present. The wireless clients must be explicitly configured to use ad hoc mode. There can be a maximum of nine members in an ad hoc 802.11 wireless network.


IEEE 802 Standards

The Institute of Electrical and Electronics Engineers (IEEE) Project 802 was formed at the beginning of the 1980s to develop standards for emerging technologies. The IEEE fostered the development of local area networking equipment from different vendors that can work together. In addition, IEEE LAN standards provided a common design goal for vendors to access a relatively larger market than if proprietary equipment were developed. This, in turn, enabled economies of scale to lower the cost of products developed for larger markets. The actual committee tasked with IEEE Project 802 is referred to as the IEEE Local and Metropolitan Area Network (LAN/MAN) Standards Committee. Its basic charter is to create, maintain, and encourage the use of IEEE/ANSI and equivalent ISO standards, primarily within layers 1 and 2 of the ISO Reference Model.

IEEE 802 Series of Standards

Wednesday, March 26, 2008

Image

An image is a rectangular group of pixels: width pixels horizontally and height pixels vertically. There are two types of images.
1) Grayscale Image
2) Color Image

Grayscale :
A grayscale image is a monochrome version of a color image. Shades vary from black through dark gray and light gray to white. A pixel value of 0 represents black and a pixel value of 255 represents white; values between 0 and 255 run from dark gray to light gray.

Color Image :
A pixel is a combination of red, green and blue (RGB). Therefore the size of the image is (width * height * 3) bytes: (width * height) bytes each for red, green and blue.
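Indexing into these two kinds of image buffer can be sketched as follows (assuming tightly packed rows and RGB byte order; the names are illustrative):

/* Sketch: fetch the pixel at (x, y). A grayscale image uses one byte
   per pixel; a color image uses three (R, G, B). */
#include <stdint.h>

uint8_t grayAt(const uint8_t *img, int width, int x, int y)
{
    return img[y * width + x];
}

void rgbAt(const uint8_t *img, int width, int x, int y,
           uint8_t *r, uint8_t *g, uint8_t *b)
{
    const uint8_t *p = img + (y * width + x) * 3;
    *r = p[0];
    *g = p[1];
    *b = p[2];
}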