Saturday, December 12, 2009

Analog and Digital

The word analog denotes a phenomenon that is continuously variable, such as a sound wave. The word digital, on the other hand, implies a discrete, exactly countable value that can be represented as a series of digits. Sound recording provides familiar examples of both approaches. Recording a phonograph record involves electromechanically transferring a physical signal into an “analogous” physical representation. Recording a CD, on the other hand, involves sampling the sound level at tens of thousands of discrete instants each second and storing the results in a physical representation of a numeric format that can in turn be used to drive the playback device. Virtually all modern computers depend on the manipulation of discrete signals in one of two states denoted by the numbers 1 and 0.

Whether the 1 indicates the presence of an electrical charge, a voltage level, a magnetic state, a pulse of light, or some other phenomenon, at a given point there is either “something” (1) or “nothing” (0). This is the most natural way to represent a series of such states. Digital representation has several advantages over analog. Since computer circuits based on binary logic can be driven to perform calculations electronically at ever-increasing speeds, even problems where an analog computer better modeled nature can now be solved more efficiently with digital machines. Data stored in digitized form is not subject to the gradual wear or distortion of the medium that plagues analog representations such as the phonograph record. Perhaps most important, because digital representations are at base simply numbers, an infinite variety of digital representations can be stored in files and manipulated, regardless of whether they started as pictures, music, or text.


Converting between Analog and Digital Representations:-
Because digital devices are the mechanism of choice for working with representations of text, graphics, and sound, a variety of devices are used to digitize analog inputs so the data can be stored and manipulated. Conceptually, each digitizing device can be thought of as having three parts: a component that scans the input and generates an analog signal, a circuit that converts the analog signal from the input to a digital format, and a component that stores the resulting digital data for later use. Most natural phenomena such as light or sound intensity are analog values that vary continuously. To convert such measurements to a digital representation, “snapshots” or sample readings must be taken at regular intervals. Sampling more frequently gives a more accurate representation of the original analog data, but at a cost in memory and processor resources. For example, in the ubiquitous flatbed scanner a moving head reads varying light levels on the paper and converts them to a varying level of current. This analog signal is in turn converted into a digital reading by an analog-to-digital converter, which creates numeric information that represents discrete spots (pixels) representing either levels of gray or of particular colors. This information is then written to disk using the formats supported by the operating system and the software that will manipulate them.
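To make the sampling process concrete, here is a minimal TypeScript sketch (not drawn from any particular device; the signal function and parameter values are invented for illustration). It samples a continuous signal at a fixed rate and quantizes each reading to a 16-bit integer, the sample format used on audio CDs:

// Sample an analog signal (a function of time returning values in [-1, 1])
// at sampleRate readings per second, quantizing each to a signed integer.
function digitize(
  signal: (t: number) => number,
  sampleRate: number,   // e.g., 44100 samples per second for CD audio
  seconds: number,
  bits: number = 16
): number[] {
  const maxLevel = 2 ** (bits - 1) - 1; // 32767 for 16 bits
  const samples: number[] = [];
  const count = Math.floor(sampleRate * seconds);
  for (let i = 0; i < count; i++) {
    const t = i / sampleRate;           // time of this "snapshot"
    const level = signal(t) * maxLevel; // scale to the integer range
    samples.push(Math.round(level));    // quantize: values between steps are lost
  }
  return samples;
}

// A 440 Hz tone sampled at CD quality for a hundredth of a second.
const tone = (t: number) => Math.sin(2 * Math.PI * 440 * t);
console.log(digitize(tone, 44100, 0.01).length); // 441 discrete readings

Raising sampleRate (or bits) improves fidelity, but each doubling also doubles the memory the samples occupy, which is precisely the trade-off noted above.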

America Online (AOL)

For millions of PC users in the 1990s, “going online” meant connecting to America Online. However, this once dominant service provider has had difficulty adapting to the changing world of the Internet. By the mid-1980s a growing number of PC users were starting to go online, mainly dialing up small bulletin board services. Generally these were run by individuals from their homes, offering a forum for discussion and a way for users to upload and download games and other free software and shareware. However, some entrepreneurs saw the possibility of creating a commercial information service that would be interesting and useful enough that users would pay a monthly subscription fee for access. Perhaps the first such enterprise to be successful was Quantum Computer Services, founded by Jim Kimsey in 1985 and soon joined by another young entrepreneur, Steve Case. Their strategy was to team up with personal computer makers such as Commodore, Apple, and IBM to provide special online services for their users. In 1989 the company changed the name of its service to America Online.

In 1991 Steve Case became CEO, taking over from the retiring Kimsey. Case’s approach to marketing AOL was to aim the service at novice PC users who had trouble mastering arcane DOS commands and interacting with text-based bulletin boards and primitive terminal programs. As an alternative, AOL provided a complete software package that managed the user’s connection, presented “friendly” graphics, and offered point-and-click access to features. Chat rooms and discussion boards were also expanded and offered in a variety of formats for casual and more formal use. Gaming, too, was a major emphasis of the early AOL, with some of the first online multiplayer fantasy role-playing games, such as a version of Dungeons and Dragons called Neverwinter Nights.

A third popular application has been instant messaging, including a feature that allowed users to set up “buddy lists” of their friends and keep track of when they were online.

Internet Challenge:-
By 1996 the World Wide Web was becoming popular. Rather than signing up with a proprietary service such as AOL, users could simply get an account with a lower-cost direct-connection service and then use a Web browser such as Netscape to access information and services. AOL was slow in adapting to the growing use of the Internet. At first, the service provided only limited access to the Web. Gradually, however, AOL offered a more seamless Web experience, allowing users to run their own browsers and other software together with the proprietary interface.


Also, responding to competition, AOL replaced its hourly rates with a flat monthly fee. Overall, AOL increasingly struggled with trying to fulfill two distinct roles: Internet access provider and content provider. By the late 1990s AOL’s monthly rates were higher than those of “no frills” access providers such as NetZero. AOL tried to compensate for this by offering integration of services and news and other content not available on the open Internet. AOL also tried to shore up its user base with aggressive marketing to users who wanted to go online but were not sure how to do so. But while it was easy to get started with AOL, some users began to complain that the service would keep billing them even after they had repeatedly attempted to cancel it. Meanwhile, AOL users got little respect from the more sophisticated inhabitants of cyberspace, who often complained that the clueless “newbies” were cluttering newsgroups and chat rooms. In 2000 AOL announced a merger with the media giant Time Warner. The hope was that the new $350 billion company would be able to leverage its huge subscriber base and rich media resources to dominate the online world.

Amazon.com

Beginning modestly in 1995 as an online bookstore, Amazon.com became one of the first success stories of the early Internet economy. Named for the world’s largest river, Amazon.com was the brainchild of entrepreneur Jeffrey Bezos. Like a number of other entrepreneurs of the early 1990s, Bezos had been searching for a way to market to the growing number of people who were going online. He soon decided that books were a good first product, since they were popular, nonperishable, relatively compact, and easy to ship. Several million books are in print at any one time, with about 275,000 titles or editions added in 2007 in the United States alone. Traditional “brick and mortar” (physical) bookstores might carry a few thousand titles up to perhaps 200,000 for the largest chains. Bookstores in turn stock their shelves mainly through major book distributors that serve as intermediaries between publishers and the public.

For an online bookstore such as Amazon.com, however, the number of titles that can be made available is limited only by the amount of warehouse space the store is willing to maintain—and no intermediary between publisher and bookseller is needed. From the start, Amazon.com’s business model has capitalized on this potential for variety and the ability to serve almost any niche interest. Over the years the company’s offerings have expanded beyond books to 34 different categories of merchandise, including software, music, video, electronics, apparel, home furnishings, and even nonperishable gourmet food and groceries.

Expansion and Profitability:-
Because of its desire to build a very diverse product line, Amazon.com, unusually for a business startup, did not expect to become profitable for about five years. The growing revenues were largely poured back into expansion. In the heated atmosphere of the Internet boom of the late 1990s, many other Internet-based businesses echoed that philosophy, and many went out of business following the bursting of the so-called dot-com bubble of the early 2000s. Some analysts questioned whether even the hugely popular Amazon.com would ever be able to convert its business volume into an operating profit. However, the company achieved its first profitable year in 2003. Since then growth has remained steady and generally impressive:

In 2005, Amazon.com earned $8.49 billion in revenues with a net income of $359 million. By then the company had about 12,000 employees and had been added to the S&P 500 stock index. In 2006 the company maintained its strategy of investing in innovation rather than focusing on short-term profits. Its latest initiatives include selling digital versions of books and magazine articles, new arrangements to sell video content, and even a venture into moviemaking. By year end, annual revenue had increased to $10.7 billion.

Algorithms in Computer Science

Just as a cook learns both general techniques such as how to sauté or how to reduce a sauce and a repertoire of specific recipes, a student of computer science learns both general problem-solving principles and the details of common algorithms. These include a variety of algorithms for organizing data, for numeric problems, and for the manipulation of data structures. A working programmer faced with a new task first tries to think of familiar algorithms that might be applicable to the current problem, perhaps with some adaptation. For example, since a variety of well-tested and well-understood sorting algorithms have been developed, a programmer is likely to apply an existing algorithm to a sorting problem rather than attempt to come up with something entirely new. Indeed, for most widely used programming languages there are packages of modules or procedures that implement commonly needed data structures and algorithms.
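As a small illustration of such reuse, a TypeScript programmer sorting records would normally supply only a comparison function to the library’s built-in sort rather than coding a sorting algorithm from scratch (the record type and data here are invented):

// A custom record type; only the comparator is problem-specific.
interface Employee {
  name: string;
  salary: number;
}

const staff: Employee[] = [
  { name: "Ada", salary: 95000 },
  { name: "Grace", salary: 120000 },
  { name: "Alan", salary: 87000 },
];

// The library supplies the well-tested sorting algorithm;
// the comparator adapts it to this particular problem.
staff.sort((a, b) => a.salary - b.salary);
console.log(staff.map((e) => e.name)); // [ "Alan", "Ada", "Grace" ]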

If a problem requires the development of a new algorithm, the designer will first attempt to determine whether the problem can, at least in theory, be solved. Some kinds of problems have been shown to have no guaranteed answer. If a new algorithm seems feasible, principles found to be effective in the past will be employed, such as breaking complex problems down into component parts or building up from the simplest case to generate a solution. For example, the merge-sort algorithm divides the data to be sorted into successively smaller portions until they are sorted, and then merges the sorted portions back together. Another important aspect of algorithm design is choosing an appropriate way to organize the data.
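The following TypeScript sketch follows that description of merge sort (the function names are illustrative):

// Divide the data into successively smaller portions until each is
// trivially sorted (length 0 or 1), then merge sorted portions together.
function mergeSort(data: number[]): number[] {
  if (data.length <= 1) return data; // simplest case: already sorted
  const mid = Math.floor(data.length / 2);
  return merge(mergeSort(data.slice(0, mid)), mergeSort(data.slice(mid)));
}

// Combine two sorted lists by repeatedly taking the smaller front element.
function merge(left: number[], right: number[]): number[] {
  const result: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  // One list is exhausted; append whatever remains of the other.
  return result.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 2, 9, 1, 5, 6])); // [ 1, 2, 5, 5, 6, 9 ]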

For example, a sorting algorithm that uses a branching structure would probably use a data structure that implements the nodes of a tree and the operations for adding, deleting, or moving them. Once the new algorithm has been outlined, it is often desirable to demonstrate that it will work for any suitable data. Mathematical techniques such as the finding and proving of loop invariants can be used to demonstrate the correctness of the implementation of the algorithm.
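As a simple illustration (invented for this article, not a formal proof), consider a summation loop whose invariant is that total always equals the sum of the elements already processed. The console.assert call merely checks the invariant at run time; a mathematical argument would show it holds on entry, is preserved by every iteration, and implies the desired result on exit:

function sum(values: number[]): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    // Invariant: total === values[0] + ... + values[i - 1]
    console.assert(
      total === values.slice(0, i).reduce((a, b) => a + b, 0),
      "loop invariant violated"
    );
    total += values[i]; // preserves the invariant for the next iteration
  }
  // On exit i === values.length, so the invariant yields the full sum.
  return total;
}

console.log(sum([1, 2, 3, 4])); // 10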

Practical Considerations:-
It is not enough that an algorithm be reliable and correct; it must also be accurate and efficient enough for its intended use. A numerical algorithm that accumulates too much error through rounding or truncation of intermediate results may not be accurate enough for a scientific application. An algorithm that works by successive approximation or convergence on an answer may require too many iterations even for today’s fast computers, or may consume too much of other computing resources such as memory. On the other hand, as computers become more and more powerful and processors are combined to create more powerful supercomputers, algorithms that were previously considered impracticable might be reconsidered. Code profiling and techniques for creating more efficient code can help in some cases. It is also necessary to keep in mind special cases where an otherwise efficient algorithm becomes much less efficient.
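The rounding problem can be seen in a few lines of TypeScript (a self-contained sketch, not tied to any application mentioned above). Naively accumulating a million copies of 0.1 lets a tiny representation error build up, while Kahan’s compensated summation carries the lost low-order bits in a separate correction term:

function naiveSum(values: number[]): number {
  let total = 0;
  for (const v of values) total += v; // each += may round the running total
  return total;
}

function kahanSum(values: number[]): number {
  let total = 0;
  let compensation = 0; // running estimate of the accumulated rounding error
  for (const v of values) {
    const adjusted = v - compensation;
    const next = total + adjusted;          // low-order bits of adjusted are lost...
    compensation = next - total - adjusted; // ...but recovered here
    total = next;
  }
  return total;
}

const tenths = new Array(1_000_000).fill(0.1);
console.log(naiveSum(tenths)); // noticeably off: rounding error has accumulated
console.log(kahanSum(tenths)); // agrees with the true sum to machine precision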

Sometimes an exact solution cannot be mathematically guaranteed or would take too much time and resources to calculate, but an approximate solution is acceptable. A so-called “greedy algorithm” can proceed in stages, testing at each stage whether the solution is “good enough.” Another approach is to use an algorithm that can produce a reasonable if not optimal solution. For example, if a group of tasks must be apportioned among several people so that all tasks are completed in the shortest possible time, the time needed to find an exact solution rises exponentially with the number of workers and tasks.

But an algorithm that first sorts the tasks by decreasing length and then distributes them among the workers by “dealing” them one at a time like cards at a bridge table will, as demonstrated by Ron Graham, give an allocation guaranteed to be within 4/3 of the optimal result, quite suitable for most applications. An interesting approach to optimizing the solution to a problem is allowing a number of separate programs to “compete,” with those showing the best performance surviving and exchanging pieces of code with other successful programs. This of course mimics evolution by natural selection in the biological world.
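Returning to the task-allocation example, the “dealing” heuristic (often called Longest Processing Time first) is short to sketch in TypeScript; the task lengths below are invented:

// Sort tasks by decreasing length, then give each task to whichever
// worker currently has the least total work.
function scheduleLPT(taskLengths: number[], workers: number): number[][] {
  const loads: number[] = new Array(workers).fill(0);
  const assignments: number[][] = Array.from({ length: workers }, () => []);
  const sorted = [...taskLengths].sort((a, b) => b - a); // longest first
  for (const task of sorted) {
    let lightest = 0;
    for (let w = 1; w < workers; w++) {
      if (loads[w] < loads[lightest]) lightest = w;
    }
    assignments[lightest].push(task);
    loads[lightest] += task;
  }
  return assignments;
}

// Two workers share six tasks; both finish at time 12, which is optimal here.
console.log(scheduleLPT([7, 5, 4, 3, 3, 2], 2)); // [ [ 7, 3, 2 ], [ 5, 4, 3 ] ]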

Friday, December 11, 2009

ALGOL

The 1950s and early 1960s saw the emergence of two high-level computer languages into widespread use. The first, FORTRAN, was designed to be an efficient language for performing scientific calculations. The second, COBOL, was designed for business applications, with an emphasis on data processing. However, many programs continued to be coded in low-level languages designed to take advantage of the hardware features of particular machines. In order to be able to easily express and share methods of calculation, leading programmers began to seek a “universal” programming language that was not designed for a particular application or hardware platform. By 1957, the German GAMM (Gesellschaft für angewandte Mathematik und Mechanik) and the American ACM (Association for Computing Machinery) had joined forces to develop the specifications for such a language. The result became known as the Zurich Report or Algol-58, and it was refined into the first widespread implementation of the language, Algol-60.

Language Features:-
Algol is a block-structured, procedural language. Each variable is declared to belong to one of a small number of kinds of data including integer, real number, or an array of values of either type. While the number of types is limited and there is no facility for defining new types, the compiler’s type checking introduced a level of security not found in most earlier languages.

An Algol program can contain a number of separate procedures or incorporate externally defined procedures, and variables with the same name in different procedure blocks do not interfere with one another. A procedure can call itself. Standard control structures were provided. The following simple Algol program stores the numbers from 1 to 10 in an array while adding them up, then prints the total:

begin
   comment store the numbers 1 to 10 in an array while summing them;
   integer array ints[1:10];
   integer counter, total;
   total := 0;
   for counter := 1 step 1 until 10 do
   begin
      ints[counter] := counter;
      total := total + ints[counter]
   end;
   printstring("The total is:");
   printint(total)
end


Algol’s Legacy:-
The revision that became known as Algol-68 expanded the variety of data types and added user-defined types and “structs”. Pointers were also implemented, and flexibility was added to the parameters that could be passed to and from procedures. Although Algol was used as a production language in some computer centers, its relative complexity and unfamiliarity impeded its acceptance, as did the widespread corporate backing for the rival languages FORTRAN and especially COBOL. Algol achieved its greatest success in two respects: for a time it became the language of choice for describing new algorithms for computer scientists, and its structural features would be adopted in the new procedural languages that emerged in the 1970s.

Ajax (Asynchronous JavaScript and XML)

With the tremendous growth in Web usage comes a challenge to deliver Web-page content more efficiently and with greater flexibility. This is desirable to serve adequately the many users who still rely on relatively low-speed dial-up Internet connections and to reduce the demand on Web servers. Ajax takes advantage of several emerging Web-development technologies to allow Web pages to interact with users while keeping the amount of data to be transmitted to a minimum. In keeping with modern Web-design principles, the organization of the Web page is managed by coding in XHTML, a dialect of HTML that uses the stricter rules and grammar of the data-description markup language XML.

Behavior such as the presentation and processing of forms or user controls is usually handled by a scripting language. Ajax techniques tie these forms of processing together so that only the part of the Web page affected by current user activity needs to be updated. Only a small amount of data needs to be received from the server, while most of the HTML code needed to update the page is generated on the client side, that is, in the Web browser. Besides making Web pages more flexible and interactive, Ajax also makes it much easier to develop more elaborate applications, even delivering fully functional applications such as word processing and spreadsheets over the Web. Some critics of Ajax have decried its reliance on JavaScript, arguing that the language has a hard-to-use syntax similar to the C language and poorly implements objects.
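Setting the criticisms aside, the basic pattern is easy to sketch in TypeScript using the browser’s XMLHttpRequest object (the URL, element id, and JSON response format below are invented for the example):

// Asynchronously fetch a small piece of data from the server, then
// regenerate only the affected fragment of the page in the browser.
function refreshHeadlines(): void {
  const request = new XMLHttpRequest();
  request.open("GET", "/headlines.json", true); // true = asynchronous
  request.onreadystatechange = () => {
    if (request.readyState === 4 && request.status === 200) {
      const headlines: string[] = JSON.parse(request.responseText);
      const list = document.getElementById("headlines");
      if (list) {
        // Only this fragment of HTML is rebuilt on the client side;
        // the rest of the page is never retransmitted or reloaded.
        list.innerHTML = headlines.map((h) => `<li>${h}</li>`).join("");
      }
    }
  };
  request.send();
}

// Poll for updates every 30 seconds without reloading the whole page.
setInterval(refreshHeadlines, 30_000);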


There is also a need to standardize behavior across the popular Web browsers. Nevertheless, Ajax has rapidly caught on in the Web development community, filling bookstore shelves with books on applying Ajax techniques in a variety of languages. Using Ajax can be simplified by frameworks that provide objects and methods the programmer can use to set up and manage the connections between server and browser. Flapjax is a complete high-level programming language that uses the same syntax as the popular JavaScript but hides the messy details of sharing and updating data between client and server.

Advanced Micro Devices (AMD)

Sunnyvale, California-based Advanced Micro Devices, Inc., is a major competitor in the market for integrated circuits, particularly the processors that are at the heart of today’s desktop and laptop computers. The company was founded in 1969 by a group of executives who had left Fairchild Semiconductor. In 1975 the company began to produce both RAM chips and a clone of the Intel 8080 microprocessor. When IBM adopted the Intel 8088 for its first personal computer in 1981, it required that there be a second source for the chip. Intel therefore signed an agreement with AMD to allow the latter to manufacture the Intel 8086 and 8088 processors. AMD also produced the 80286, the second generation of PC-compatible processors, but when Intel developed the 80386 it canceled the agreement with AMD.

A lengthy legal dispute ensued, with the California Supreme Court finally siding with AMD in 1991. However, as disputes continued over AMD’s use of “microcode” from Intel chips, AMD eventually used a “clean room” process to independently create functionally equivalent code. The speed with which new generations of chips were being produced rendered this approach impracticable by the mid-1990s, and Intel and AMD concluded an agreement allowing AMD to use Intel code and providing for cross-licensing of patents. In the early and mid-1990s AMD had trouble keeping up with Intel’s new Pentium line, but the AMD K6 was widely viewed as a superior implementation of the microcode in the Intel Pentium, and it was “pin compatible,” making it easy for manufacturers to include it on their motherboards.



Today AMD remains second in market share to Intel. AMD’s Athlon, Opteron, Turion, and Sempron processors are comparable to corresponding Intel Pentium processors, and the two companies compete fiercely as each introduces new architectural features to provide greater speed or processing capacity. In the early 2000s AMD seized the opportunity to beat Intel to market with chips that extended the x86 architecture from 32 bits to 64 bits. The new specification standard, called AMD64, was adopted for upcoming operating systems by Microsoft, Sun Microsystems, and the developers of Linux and UNIX kernels. AMD has also matched Intel in the latest generation of dual-core chips that essentially provide two processors on one chip.

Meanwhile, AMD strengthened its position in the high-end server market when, in May 2006, Dell Computer announced that it would market servers containing AMD Opteron processors. In 2006 AMD also moved into the graphics-processing field by acquiring ATI, a leading maker of video cards, at a cost of $5.4 billion. AMD also continues to be a leading maker of flash memory, closely collaborating with Japan’s Fujitsu Corporation. In 2008 AMD continued its aggressive pursuit of market share, announcing a variety of products, including a quad-core Opteron chip that it expects will catch up to, if not surpass, similar chips from Intel.

Sunday, December 6, 2009

Adobe Systems

Adobe Systems is best known for products relating to the formatting, printing, and display of documents. Founded in 1982 by John Warnock and Charles Geschke, the company is named for a creek near one of their homes. Adobe’s first major product was PostScript, a language that describes the font sizes, styles, and other formatting needed to print pages in near-typeset quality. This was a significant contribution to the development of software for document creation, particularly on the Apple Macintosh, starting in the late 1980s. Building on this foundation, Adobe developed high-quality digital fonts. However, Apple’s TrueType fonts proved to be superior in scaling to different sizes and in the precise control over the pixels used to display them. With the licensing of TrueType to Microsoft for use in Windows, TrueType fonts took over the desktop, although Adobe Type 1 remained popular in commercial typesetting applications. Finally, in the late 1990s Adobe, together with Microsoft, established a new font format called OpenType, and by 2003 Adobe had converted all of its Type 1 fonts to the new format. Adobe’s Portable Document Format (PDF) has become a ubiquitous standard for displaying print documents. Adobe greatly contributed to this development by making a free Adobe Acrobat PDF reader available for download.

Image Processing Software:-
In the mid-1980s Adobe’s founders realized that they could further exploit the knowledge of graphics rendition that they had gained in developing their fonts. They began to create software that would make these capabilities available to illustrators and artists as well as desktop publishers. Their first such product was Adobe Illustrator for the Macintosh, a vector-based drawing program that built upon the graphics capabilities of their PostScript language. In 1989 Adobe introduced Adobe Photoshop for the Macintosh. With its tremendous variety of features, the program soon became a standard tool for graphic artists. However, Adobe seemed to have difficulty at first in anticipating the growth of desktop publishing and graphic arts on the Microsoft Windows platform. Much of that market was seized by competitors such as Aldus PageMaker and QuarkXPress. By the mid-1990s, however, Adobe, fueled by the continuing revenue from its PostScript technology, had acquired both Aldus and Frame Technologies, maker of the popular FrameMaker document design program. Meanwhile Photoshop continued to develop on both the Macintosh and Windows platforms, aided by its ability to accept add-ons from hundreds of third-party developers.

Multimedia and the Web:-
Adobe made a significant expansion beyond document processing into multimedia with its acquisition of Macromedia in 2005 at a cost of about $3.4 billion. The company has integrated Macromedia’s Flash and Dreamweaver Web-design software into its Creative Suite 3. Another recent Adobe product that targets Web-based publishing is Digital Editions, which integrates the existing Dreamweaver and Flash software into a powerful but easy-to-use tool for delivering text content and multimedia to Web browsers. Buoyed by these developments, Adobe earned nearly $2 billion in revenue in 2005, about $2.5 billion in 2006, and $3.16 billion in 2007. Today Adobe has over 6,600 employees, with its headquarters in San Jose and offices in Seattle and San Francisco as well as Bangalore, India; Ottawa, Canada; and other locations. In recent years the company has been regarded as a superior place to work, being ranked by Fortune magazine as the fifth best in America in 2003 and sixth best in 2004.